00:00:00.001 Started by upstream project "autotest-per-patch" build number 124186 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.032 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.046 Fetching changes from the remote Git repository 00:00:00.048 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.068 Using shallow fetch with depth 1 00:00:00.068 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.068 > git --version # timeout=10 00:00:00.095 > git --version # 'git version 2.39.2' 00:00:00.095 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.163 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.163 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.462 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.471 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.481 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:03.481 > git config core.sparsecheckout # timeout=10 00:00:03.489 > git read-tree -mu HEAD # timeout=10 00:00:03.501 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:03.516 Commit message: "pool: fixes for VisualBuild class" 00:00:03.516 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:03.612 [Pipeline] Start of Pipeline 00:00:03.627 [Pipeline] library 00:00:03.629 Loading library shm_lib@master 00:00:03.629 Library shm_lib@master is cached. Copying from home. 00:00:03.648 [Pipeline] node 00:00:03.661 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.662 [Pipeline] { 00:00:03.673 [Pipeline] catchError 00:00:03.674 [Pipeline] { 00:00:03.683 [Pipeline] wrap 00:00:03.691 [Pipeline] { 00:00:03.697 [Pipeline] stage 00:00:03.698 [Pipeline] { (Prologue) 00:00:03.878 [Pipeline] sh 00:00:04.159 + logger -p user.info -t JENKINS-CI 00:00:04.173 [Pipeline] echo 00:00:04.174 Node: WFP20 00:00:04.182 [Pipeline] sh 00:00:04.482 [Pipeline] setCustomBuildProperty 00:00:04.494 [Pipeline] echo 00:00:04.496 Cleanup processes 00:00:04.502 [Pipeline] sh 00:00:04.783 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.783 4006973 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.794 [Pipeline] sh 00:00:05.075 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.075 ++ grep -v 'sudo pgrep' 00:00:05.075 ++ awk '{print $1}' 00:00:05.075 + sudo kill -9 00:00:05.075 + true 00:00:05.087 [Pipeline] cleanWs 00:00:05.096 [WS-CLEANUP] Deleting project workspace... 00:00:05.096 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.101 [WS-CLEANUP] done 00:00:05.104 [Pipeline] setCustomBuildProperty 00:00:05.116 [Pipeline] sh 00:00:05.392 + sudo git config --global --replace-all safe.directory '*' 00:00:05.467 [Pipeline] nodesByLabel 00:00:05.469 Found a total of 2 nodes with the 'sorcerer' label 00:00:05.479 [Pipeline] httpRequest 00:00:05.484 HttpMethod: GET 00:00:05.484 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.488 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.498 Response Code: HTTP/1.1 200 OK 00:00:05.498 Success: Status code 200 is in the accepted range: 200,404 00:00:05.499 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.214 [Pipeline] sh 00:00:08.494 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.510 [Pipeline] httpRequest 00:00:08.514 HttpMethod: GET 00:00:08.515 URL: http://10.211.164.101/packages/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:00:08.515 Sending request to url: http://10.211.164.101/packages/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:00:08.533 Response Code: HTTP/1.1 200 OK 00:00:08.534 Success: Status code 200 is in the accepted range: 200,404 00:00:08.534 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:01:06.250 [Pipeline] sh 00:01:06.532 + tar --no-same-owner -xf spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:01:09.831 [Pipeline] sh 00:01:10.112 + git -C spdk log --oneline -n5 00:01:10.112 86abcfbbd bdev_nvme: add debugging code to discovery path to debug issue #3401 00:01:10.112 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:01:10.112 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:01:10.112 f470a0dc6 event: do not call reactor events from spdk_thread context 00:01:10.113 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code 00:01:10.125 [Pipeline] } 00:01:10.145 [Pipeline] // stage 00:01:10.154 [Pipeline] stage 00:01:10.156 [Pipeline] { (Prepare) 00:01:10.173 [Pipeline] writeFile 00:01:10.192 [Pipeline] sh 00:01:10.468 + logger -p user.info -t JENKINS-CI 00:01:10.481 [Pipeline] sh 00:01:10.763 + logger -p user.info -t JENKINS-CI 00:01:10.776 [Pipeline] sh 00:01:11.055 + cat autorun-spdk.conf 00:01:11.056 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.056 SPDK_TEST_FUZZER_SHORT=1 00:01:11.056 SPDK_TEST_FUZZER=1 00:01:11.056 SPDK_RUN_UBSAN=1 00:01:11.062 RUN_NIGHTLY=0 00:01:11.066 [Pipeline] readFile 00:01:11.082 [Pipeline] withEnv 00:01:11.083 [Pipeline] { 00:01:11.092 [Pipeline] sh 00:01:11.369 + set -ex 00:01:11.369 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:11.369 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:11.369 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.369 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:11.369 ++ SPDK_TEST_FUZZER=1 00:01:11.369 ++ SPDK_RUN_UBSAN=1 00:01:11.369 ++ RUN_NIGHTLY=0 00:01:11.369 + case $SPDK_TEST_NVMF_NICS in 00:01:11.369 + DRIVERS= 00:01:11.369 + [[ -n '' ]] 00:01:11.369 + exit 0 00:01:11.379 [Pipeline] } 00:01:11.398 [Pipeline] // withEnv 00:01:11.403 [Pipeline] } 00:01:11.419 [Pipeline] // stage 00:01:11.428 [Pipeline] catchError 00:01:11.430 [Pipeline] { 00:01:11.448 [Pipeline] timeout 00:01:11.448 Timeout set to expire in 30 min 00:01:11.450 [Pipeline] { 00:01:11.466 [Pipeline] stage 
00:01:11.467 [Pipeline] { (Tests) 00:01:11.480 [Pipeline] sh 00:01:11.760 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:11.760 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:11.760 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:11.760 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:11.760 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:11.760 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:11.761 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:11.761 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:11.761 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:11.761 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:11.761 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:11.761 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:11.761 + source /etc/os-release 00:01:11.761 ++ NAME='Fedora Linux' 00:01:11.761 ++ VERSION='38 (Cloud Edition)' 00:01:11.761 ++ ID=fedora 00:01:11.761 ++ VERSION_ID=38 00:01:11.761 ++ VERSION_CODENAME= 00:01:11.761 ++ PLATFORM_ID=platform:f38 00:01:11.761 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:11.761 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:11.761 ++ LOGO=fedora-logo-icon 00:01:11.761 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:11.761 ++ HOME_URL=https://fedoraproject.org/ 00:01:11.761 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:11.761 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:11.761 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:11.761 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:11.761 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:11.761 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:11.761 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:11.761 ++ SUPPORT_END=2024-05-14 00:01:11.761 ++ VARIANT='Cloud Edition' 00:01:11.761 ++ VARIANT_ID=cloud 00:01:11.761 + uname -a 00:01:11.761 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:11.761 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:15.955 Hugepages 00:01:15.955 node hugesize free / total 00:01:15.955 node0 1048576kB 0 / 0 00:01:15.955 node0 2048kB 0 / 0 00:01:15.955 node1 1048576kB 0 / 0 00:01:15.955 node1 2048kB 0 / 0 00:01:15.955 00:01:15.955 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.955 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:15.955 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:15.955 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.955 + rm -f /tmp/spdk-ld-path 00:01:15.955 + 
source autorun-spdk.conf 00:01:15.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.955 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:15.955 ++ SPDK_TEST_FUZZER=1 00:01:15.955 ++ SPDK_RUN_UBSAN=1 00:01:15.955 ++ RUN_NIGHTLY=0 00:01:15.955 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.955 + [[ -n '' ]] 00:01:15.955 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:15.955 + for M in /var/spdk/build-*-manifest.txt 00:01:15.955 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.955 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:15.955 + for M in /var/spdk/build-*-manifest.txt 00:01:15.955 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.955 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:15.955 ++ uname 00:01:15.955 + [[ Linux == \L\i\n\u\x ]] 00:01:15.955 + sudo dmesg -T 00:01:15.955 + sudo dmesg --clear 00:01:15.955 + dmesg_pid=4008585 00:01:15.955 + [[ Fedora Linux == FreeBSD ]] 00:01:15.955 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.955 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.955 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.955 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.955 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.955 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.955 + sudo dmesg -Tw 00:01:15.955 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.955 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.955 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.955 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:15.955 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.955 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.955 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.955 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.955 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.955 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.955 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:15.955 Test configuration: 00:01:15.955 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.955 SPDK_TEST_FUZZER_SHORT=1 00:01:15.955 SPDK_TEST_FUZZER=1 00:01:15.955 SPDK_RUN_UBSAN=1 00:01:15.955 RUN_NIGHTLY=0 22:52:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:15.955 22:52:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.955 22:52:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.955 22:52:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.955 22:52:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.955 22:52:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.955 22:52:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.955 22:52:07 -- paths/export.sh@5 -- $ export PATH 00:01:15.955 22:52:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.955 22:52:07 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:15.955 22:52:07 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:15.955 22:52:07 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717793527.XXXXXX 00:01:15.955 22:52:07 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717793527.DdzskS 00:01:15.955 22:52:07 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:15.955 22:52:07 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:15.955 22:52:07 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:15.956 22:52:07 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.956 22:52:07 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.956 22:52:07 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:15.956 22:52:07 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:15.956 22:52:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.956 22:52:08 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.956 22:52:08 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:15.956 22:52:08 -- pm/common@17 -- $ local monitor 00:01:15.956 22:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.956 22:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.956 22:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.956 22:52:08 -- pm/common@21 -- $ date +%s 00:01:15.956 22:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.956 22:52:08 -- pm/common@21 -- $ date +%s 
00:01:15.956 22:52:08 -- pm/common@21 -- $ date +%s 00:01:15.956 22:52:08 -- pm/common@25 -- $ sleep 1 00:01:15.956 22:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793528 00:01:15.956 22:52:08 -- pm/common@21 -- $ date +%s 00:01:15.956 22:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793528 00:01:15.956 22:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793528 00:01:15.956 22:52:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793528 00:01:15.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793528_collect-vmstat.pm.log 00:01:15.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793528_collect-cpu-load.pm.log 00:01:15.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793528_collect-cpu-temp.pm.log 00:01:15.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793528_collect-bmc-pm.bmc.pm.log 00:01:16.894 22:52:09 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:16.894 22:52:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.894 22:52:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.894 22:52:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:16.894 22:52:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.894 Fri Jun 7 08:52:09 PM UTC 2024 00:01:16.894 22:52:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.894 v24.09-pre-53-g86abcfbbd 00:01:16.894 22:52:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.894 22:52:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.894 22:52:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.894 22:52:09 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:16.894 22:52:09 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:16.894 22:52:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.894 ************************************ 00:01:16.894 START TEST ubsan 00:01:16.894 ************************************ 00:01:16.894 22:52:09 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:16.894 using ubsan 00:01:16.894 00:01:16.894 real 0m0.001s 00:01:16.894 user 0m0.000s 00:01:16.894 sys 0m0.000s 00:01:16.894 22:52:09 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:16.894 22:52:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:16.894 ************************************ 00:01:16.894 END TEST ubsan 00:01:16.894 ************************************ 00:01:16.894 22:52:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.894 22:52:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.894 22:52:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.894 22:52:09 -- 
spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:16.894 22:52:09 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:16.894 22:52:09 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:16.894 22:52:09 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:01:16.894 22:52:09 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:16.894 22:52:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.894 ************************************ 00:01:16.894 START TEST autobuild_llvm_precompile 00:01:16.894 ************************************ 00:01:16.894 22:52:09 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ _llvm_precompile 00:01:16.894 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:17.154 Target: x86_64-redhat-linux-gnu 00:01:17.154 Thread model: posix 00:01:17.154 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:17.154 22:52:09 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:17.413 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:17.413 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:17.673 Using 'verbs' RDMA provider 00:01:34.032 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:48.925 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.184 Creating mk/config.mk...done. 00:01:49.184 Creating mk/cc.flags.mk...done. 00:01:49.184 Type 'make' to build. 
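The autobuild_llvm_precompile trace above shows how the toolchain is pinned: the major version is parsed out of `clang --version` with a bash regex, CC/CXX are exported against that major, and the versioned clang resource directory is globbed for libclang_rt.fuzzer_no_main before `--with-fuzzer` is appended to the configure parameters. A minimal bash reconstruction of that pattern, using the variable names visible in the trace (illustrative, not the verbatim autobuild_common.sh helper):

    # Sketch of the clang/libFuzzer detection traced above (illustrative,
    # not the verbatim SPDK autobuild_common.sh code).
    shopt -s extglob nullglob
    if [[ "$(clang --version)" =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]; then
        clang_version=${BASH_REMATCH[1]}   # e.g. 16.0.6
        clang_num=${BASH_REMATCH[2]}       # e.g. 16
    fi
    export CC=clang-$clang_num CXX=clang++-$clang_num
    # The archive may sit under either the major-only or the full-version
    # resource dir, with or without the -x86_64 suffix; the extglob
    # alternation below covers both layouts, as in the glob in the trace.
    fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    fuzzer_lib=${fuzzer_libs[0]}
    [[ -e "$fuzzer_lib" ]] && config_params+=" --with-fuzzer=$fuzzer_lib"

With nullglob set, a missing archive leaves fuzzer_libs empty and the `--with-fuzzer` flag is simply not added.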
00:01:49.184 00:01:49.184 real 0m32.107s 00:01:49.184 user 0m14.440s 00:01:49.184 sys 0m17.102s 00:01:49.185 22:52:41 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:49.185 22:52:41 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:49.185 ************************************ 00:01:49.185 END TEST autobuild_llvm_precompile 00:01:49.185 ************************************ 00:01:49.185 22:52:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:49.185 22:52:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:49.185 22:52:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:49.185 22:52:41 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:49.185 22:52:41 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:49.444 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:49.444 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:50.014 Using 'verbs' RDMA provider 00:02:05.839 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:18.057 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:18.057 Creating mk/config.mk...done. 00:02:18.057 Creating mk/cc.flags.mk...done. 00:02:18.057 Type 'make' to build. 00:02:18.057 22:53:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:18.057 22:53:08 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:18.057 22:53:08 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:18.057 22:53:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.057 ************************************ 00:02:18.057 START TEST make 00:02:18.057 ************************************ 00:02:18.057 22:53:08 make -- common/autotest_common.sh@1124 -- $ make -j112 00:02:18.057 make[1]: Nothing to be done for 'all'. 
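The `run_test make make -j112` call above wraps the build in the same START/END banners and `real/user/sys` timing that the ubsan test produced earlier. A hedged sketch of what such a wrapper looks like; the real helper is run_test in SPDK's autotest_common.sh, which additionally toggles xtrace and does per-test bookkeeping:

    # Hedged sketch of a run_test-style wrapper matching the banners and
    # timing seen above; SPDK's actual helper differs in detail.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # the timed command's exit status survives "time"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test make make -j112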
00:02:18.992 The Meson build system 00:02:18.992 Version: 1.3.1 00:02:18.992 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:18.992 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.992 Build type: native build 00:02:18.992 Project name: libvfio-user 00:02:18.992 Project version: 0.0.1 00:02:18.992 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:18.992 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:18.992 Host machine cpu family: x86_64 00:02:18.992 Host machine cpu: x86_64 00:02:18.992 Run-time dependency threads found: YES 00:02:18.992 Library dl found: YES 00:02:18.992 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:18.992 Run-time dependency json-c found: YES 0.17 00:02:18.992 Run-time dependency cmocka found: YES 1.1.7 00:02:18.992 Program pytest-3 found: NO 00:02:18.992 Program flake8 found: NO 00:02:18.992 Program misspell-fixer found: NO 00:02:18.992 Program restructuredtext-lint found: NO 00:02:18.992 Program valgrind found: YES (/usr/bin/valgrind) 00:02:18.992 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.992 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.992 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.992 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:18.992 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:18.992 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:18.992 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:18.992 Build targets in project: 8 00:02:18.992 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:18.992 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:18.992 00:02:18.992 libvfio-user 0.0.1 00:02:18.992 00:02:18.992 User defined options 00:02:18.992 buildtype : debug 00:02:18.992 default_library: static 00:02:18.992 libdir : /usr/local/lib 00:02:18.992 00:02:18.992 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.250 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.508 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:19.508 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:19.508 [3/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:19.508 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:19.508 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:19.508 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:19.508 [7/36] Compiling C object samples/null.p/null.c.o 00:02:19.508 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:19.508 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:19.508 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:19.508 [11/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:19.508 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:19.508 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:19.508 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:19.508 [15/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:19.509 [16/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:19.509 [17/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:19.509 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:19.509 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:19.509 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:19.509 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:19.509 [22/36] Compiling C object samples/server.p/server.c.o 00:02:19.509 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:19.509 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:19.509 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:19.509 [26/36] Compiling C object samples/client.p/client.c.o 00:02:19.509 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:19.509 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:19.509 [29/36] Linking static target lib/libvfio-user.a 00:02:19.509 [30/36] Linking target samples/client 00:02:19.509 [31/36] Linking target test/unit_tests 00:02:19.509 [32/36] Linking target samples/null 00:02:19.509 [33/36] Linking target samples/gpio-pci-idio-16 00:02:19.509 [34/36] Linking target samples/server 00:02:19.509 [35/36] Linking target samples/shadow_ioeventfd_server 00:02:19.509 [36/36] Linking target samples/lspci 00:02:19.509 INFO: autodetecting backend as ninja 00:02:19.509 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:19.767 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:20.025 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:20.025 ninja: no work to do. 00:02:26.584 The Meson build system 00:02:26.584 Version: 1.3.1 00:02:26.584 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:26.584 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:26.584 Build type: native build 00:02:26.584 Program cat found: YES (/usr/bin/cat) 00:02:26.584 Project name: DPDK 00:02:26.584 Project version: 24.03.0 00:02:26.584 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:26.584 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:26.584 Host machine cpu family: x86_64 00:02:26.584 Host machine cpu: x86_64 00:02:26.584 Message: ## Building in Developer Mode ## 00:02:26.584 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.584 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:26.584 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.584 Program python3 found: YES (/usr/bin/python3) 00:02:26.584 Program cat found: YES (/usr/bin/cat) 00:02:26.584 Compiler for C supports arguments -march=native: YES 00:02:26.584 Checking for size of "void *" : 8 00:02:26.584 Checking for size of "void *" : 8 (cached) 00:02:26.584 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:26.584 Library m found: YES 00:02:26.584 Library numa found: YES 00:02:26.584 Has header "numaif.h" : YES 00:02:26.584 Library fdt found: NO 00:02:26.584 Library execinfo found: NO 00:02:26.584 Has header "execinfo.h" : YES 00:02:26.584 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.584 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.584 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.584 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.584 Run-time dependency openssl found: YES 3.0.9 00:02:26.584 Run-time dependency libpcap found: YES 1.10.4 00:02:26.584 Has header "pcap.h" with dependency libpcap: YES 00:02:26.584 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.584 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.584 Compiler for C supports arguments -Wformat: YES 00:02:26.584 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:26.584 Compiler for C supports arguments -Wformat-security: YES 00:02:26.584 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.584 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.584 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.584 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.584 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.584 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.584 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.584 Compiler for C supports arguments -Wundef: YES 00:02:26.584 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.584 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.584 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:26.584 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:26.584 Program objdump found: YES (/usr/bin/objdump) 00:02:26.584 Compiler for C supports arguments -mavx512f: YES 00:02:26.584 Checking if "AVX512 checking" compiles: YES 00:02:26.584 Fetching value of define "__SSE4_2__" : 1 00:02:26.584 Fetching value of define "__AES__" : 1 00:02:26.584 Fetching value of define "__AVX__" : 1 00:02:26.584 Fetching value of define "__AVX2__" : 1 00:02:26.584 Fetching value of define "__AVX512BW__" : 1 00:02:26.584 Fetching value of define "__AVX512CD__" : 1 00:02:26.584 Fetching value of define "__AVX512DQ__" : 1 00:02:26.584 Fetching value of define "__AVX512F__" : 1 00:02:26.584 Fetching value of define "__AVX512VL__" : 1 00:02:26.584 Fetching value of define "__PCLMUL__" : 1 00:02:26.584 Fetching value of define "__RDRND__" : 1 00:02:26.584 Fetching value of define "__RDSEED__" : 1 00:02:26.584 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.584 Fetching value of define "__znver1__" : (undefined) 00:02:26.584 Fetching value of define "__znver2__" : (undefined) 00:02:26.584 Fetching value of define "__znver3__" : (undefined) 00:02:26.584 Fetching value of define "__znver4__" : (undefined) 00:02:26.585 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:26.585 Message: lib/log: Defining dependency "log" 00:02:26.585 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.585 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.585 Checking for function "getentropy" : NO 00:02:26.585 Message: lib/eal: Defining dependency "eal" 00:02:26.585 Message: lib/ring: Defining dependency "ring" 00:02:26.585 Message: lib/rcu: Defining dependency "rcu" 00:02:26.585 Message: lib/mempool: Defining dependency "mempool" 00:02:26.585 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.585 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.585 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:26.585 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:26.585 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:26.585 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:26.585 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:26.585 Compiler for C supports arguments -mpclmul: YES 00:02:26.585 Compiler for C supports arguments -maes: YES 00:02:26.585 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.585 Compiler for C supports arguments -mavx512bw: YES 00:02:26.585 Compiler for C supports arguments -mavx512dq: YES 00:02:26.585 Compiler for C supports arguments -mavx512vl: YES 00:02:26.585 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.585 Compiler for C supports arguments -mavx2: YES 00:02:26.585 Compiler for C supports arguments -mavx: YES 00:02:26.585 Message: lib/net: Defining dependency "net" 00:02:26.585 Message: lib/meter: Defining dependency "meter" 00:02:26.585 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.585 Message: lib/pci: Defining dependency "pci" 00:02:26.585 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.585 Message: lib/hash: Defining dependency "hash" 00:02:26.585 Message: lib/timer: Defining dependency "timer" 00:02:26.585 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.585 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.585 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.585 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.585 Message: lib/power: Defining dependency "power" 00:02:26.585 Message: lib/reorder: Defining 
dependency "reorder" 00:02:26.585 Message: lib/security: Defining dependency "security" 00:02:26.585 Has header "linux/userfaultfd.h" : YES 00:02:26.585 Has header "linux/vduse.h" : YES 00:02:26.585 Message: lib/vhost: Defining dependency "vhost" 00:02:26.585 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:26.585 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:26.585 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:26.585 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:26.585 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:26.585 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:26.585 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:26.585 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:26.585 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:26.585 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:26.585 Program doxygen found: YES (/usr/bin/doxygen) 00:02:26.585 Configuring doxy-api-html.conf using configuration 00:02:26.585 Configuring doxy-api-man.conf using configuration 00:02:26.585 Program mandb found: YES (/usr/bin/mandb) 00:02:26.585 Program sphinx-build found: NO 00:02:26.585 Configuring rte_build_config.h using configuration 00:02:26.585 Message: 00:02:26.585 ================= 00:02:26.585 Applications Enabled 00:02:26.585 ================= 00:02:26.585 00:02:26.585 apps: 00:02:26.585 00:02:26.585 00:02:26.585 Message: 00:02:26.585 ================= 00:02:26.585 Libraries Enabled 00:02:26.585 ================= 00:02:26.585 00:02:26.585 libs: 00:02:26.585 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:26.585 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:26.585 cryptodev, dmadev, power, reorder, security, vhost, 00:02:26.585 00:02:26.585 Message: 00:02:26.585 =============== 00:02:26.585 Drivers Enabled 00:02:26.585 =============== 00:02:26.585 00:02:26.585 common: 00:02:26.585 00:02:26.585 bus: 00:02:26.585 pci, vdev, 00:02:26.585 mempool: 00:02:26.585 ring, 00:02:26.585 dma: 00:02:26.585 00:02:26.585 net: 00:02:26.585 00:02:26.585 crypto: 00:02:26.585 00:02:26.585 compress: 00:02:26.585 00:02:26.585 vdpa: 00:02:26.585 00:02:26.585 00:02:26.585 Message: 00:02:26.585 ================= 00:02:26.585 Content Skipped 00:02:26.585 ================= 00:02:26.585 00:02:26.585 apps: 00:02:26.585 dumpcap: explicitly disabled via build config 00:02:26.585 graph: explicitly disabled via build config 00:02:26.585 pdump: explicitly disabled via build config 00:02:26.585 proc-info: explicitly disabled via build config 00:02:26.585 test-acl: explicitly disabled via build config 00:02:26.585 test-bbdev: explicitly disabled via build config 00:02:26.585 test-cmdline: explicitly disabled via build config 00:02:26.585 test-compress-perf: explicitly disabled via build config 00:02:26.585 test-crypto-perf: explicitly disabled via build config 00:02:26.585 test-dma-perf: explicitly disabled via build config 00:02:26.585 test-eventdev: explicitly disabled via build config 00:02:26.585 test-fib: explicitly disabled via build config 00:02:26.585 test-flow-perf: explicitly disabled via build config 00:02:26.585 test-gpudev: explicitly disabled via build config 00:02:26.585 test-mldev: explicitly disabled via build config 00:02:26.585 test-pipeline: explicitly disabled via build config 00:02:26.585 test-pmd: explicitly 
disabled via build config 00:02:26.585 test-regex: explicitly disabled via build config 00:02:26.585 test-sad: explicitly disabled via build config 00:02:26.585 test-security-perf: explicitly disabled via build config 00:02:26.585 00:02:26.585 libs: 00:02:26.585 argparse: explicitly disabled via build config 00:02:26.585 metrics: explicitly disabled via build config 00:02:26.585 acl: explicitly disabled via build config 00:02:26.585 bbdev: explicitly disabled via build config 00:02:26.585 bitratestats: explicitly disabled via build config 00:02:26.585 bpf: explicitly disabled via build config 00:02:26.585 cfgfile: explicitly disabled via build config 00:02:26.585 distributor: explicitly disabled via build config 00:02:26.585 efd: explicitly disabled via build config 00:02:26.585 eventdev: explicitly disabled via build config 00:02:26.585 dispatcher: explicitly disabled via build config 00:02:26.585 gpudev: explicitly disabled via build config 00:02:26.585 gro: explicitly disabled via build config 00:02:26.585 gso: explicitly disabled via build config 00:02:26.585 ip_frag: explicitly disabled via build config 00:02:26.585 jobstats: explicitly disabled via build config 00:02:26.585 latencystats: explicitly disabled via build config 00:02:26.585 lpm: explicitly disabled via build config 00:02:26.585 member: explicitly disabled via build config 00:02:26.585 pcapng: explicitly disabled via build config 00:02:26.585 rawdev: explicitly disabled via build config 00:02:26.585 regexdev: explicitly disabled via build config 00:02:26.585 mldev: explicitly disabled via build config 00:02:26.585 rib: explicitly disabled via build config 00:02:26.585 sched: explicitly disabled via build config 00:02:26.585 stack: explicitly disabled via build config 00:02:26.585 ipsec: explicitly disabled via build config 00:02:26.585 pdcp: explicitly disabled via build config 00:02:26.585 fib: explicitly disabled via build config 00:02:26.585 port: explicitly disabled via build config 00:02:26.585 pdump: explicitly disabled via build config 00:02:26.585 table: explicitly disabled via build config 00:02:26.585 pipeline: explicitly disabled via build config 00:02:26.585 graph: explicitly disabled via build config 00:02:26.585 node: explicitly disabled via build config 00:02:26.585 00:02:26.585 drivers: 00:02:26.585 common/cpt: not in enabled drivers build config 00:02:26.585 common/dpaax: not in enabled drivers build config 00:02:26.585 common/iavf: not in enabled drivers build config 00:02:26.585 common/idpf: not in enabled drivers build config 00:02:26.585 common/ionic: not in enabled drivers build config 00:02:26.585 common/mvep: not in enabled drivers build config 00:02:26.585 common/octeontx: not in enabled drivers build config 00:02:26.585 bus/auxiliary: not in enabled drivers build config 00:02:26.585 bus/cdx: not in enabled drivers build config 00:02:26.585 bus/dpaa: not in enabled drivers build config 00:02:26.585 bus/fslmc: not in enabled drivers build config 00:02:26.585 bus/ifpga: not in enabled drivers build config 00:02:26.585 bus/platform: not in enabled drivers build config 00:02:26.585 bus/uacce: not in enabled drivers build config 00:02:26.585 bus/vmbus: not in enabled drivers build config 00:02:26.585 common/cnxk: not in enabled drivers build config 00:02:26.585 common/mlx5: not in enabled drivers build config 00:02:26.585 common/nfp: not in enabled drivers build config 00:02:26.585 common/nitrox: not in enabled drivers build config 00:02:26.585 common/qat: not in enabled drivers build config 
00:02:26.585 common/sfc_efx: not in enabled drivers build config 00:02:26.585 mempool/bucket: not in enabled drivers build config 00:02:26.585 mempool/cnxk: not in enabled drivers build config 00:02:26.585 mempool/dpaa: not in enabled drivers build config 00:02:26.585 mempool/dpaa2: not in enabled drivers build config 00:02:26.585 mempool/octeontx: not in enabled drivers build config 00:02:26.585 mempool/stack: not in enabled drivers build config 00:02:26.585 dma/cnxk: not in enabled drivers build config 00:02:26.585 dma/dpaa: not in enabled drivers build config 00:02:26.585 dma/dpaa2: not in enabled drivers build config 00:02:26.585 dma/hisilicon: not in enabled drivers build config 00:02:26.585 dma/idxd: not in enabled drivers build config 00:02:26.585 dma/ioat: not in enabled drivers build config 00:02:26.585 dma/skeleton: not in enabled drivers build config 00:02:26.585 net/af_packet: not in enabled drivers build config 00:02:26.585 net/af_xdp: not in enabled drivers build config 00:02:26.585 net/ark: not in enabled drivers build config 00:02:26.586 net/atlantic: not in enabled drivers build config 00:02:26.586 net/avp: not in enabled drivers build config 00:02:26.586 net/axgbe: not in enabled drivers build config 00:02:26.586 net/bnx2x: not in enabled drivers build config 00:02:26.586 net/bnxt: not in enabled drivers build config 00:02:26.586 net/bonding: not in enabled drivers build config 00:02:26.586 net/cnxk: not in enabled drivers build config 00:02:26.586 net/cpfl: not in enabled drivers build config 00:02:26.586 net/cxgbe: not in enabled drivers build config 00:02:26.586 net/dpaa: not in enabled drivers build config 00:02:26.586 net/dpaa2: not in enabled drivers build config 00:02:26.586 net/e1000: not in enabled drivers build config 00:02:26.586 net/ena: not in enabled drivers build config 00:02:26.586 net/enetc: not in enabled drivers build config 00:02:26.586 net/enetfec: not in enabled drivers build config 00:02:26.586 net/enic: not in enabled drivers build config 00:02:26.586 net/failsafe: not in enabled drivers build config 00:02:26.586 net/fm10k: not in enabled drivers build config 00:02:26.586 net/gve: not in enabled drivers build config 00:02:26.586 net/hinic: not in enabled drivers build config 00:02:26.586 net/hns3: not in enabled drivers build config 00:02:26.586 net/i40e: not in enabled drivers build config 00:02:26.586 net/iavf: not in enabled drivers build config 00:02:26.586 net/ice: not in enabled drivers build config 00:02:26.586 net/idpf: not in enabled drivers build config 00:02:26.586 net/igc: not in enabled drivers build config 00:02:26.586 net/ionic: not in enabled drivers build config 00:02:26.586 net/ipn3ke: not in enabled drivers build config 00:02:26.586 net/ixgbe: not in enabled drivers build config 00:02:26.586 net/mana: not in enabled drivers build config 00:02:26.586 net/memif: not in enabled drivers build config 00:02:26.586 net/mlx4: not in enabled drivers build config 00:02:26.586 net/mlx5: not in enabled drivers build config 00:02:26.586 net/mvneta: not in enabled drivers build config 00:02:26.586 net/mvpp2: not in enabled drivers build config 00:02:26.586 net/netvsc: not in enabled drivers build config 00:02:26.586 net/nfb: not in enabled drivers build config 00:02:26.586 net/nfp: not in enabled drivers build config 00:02:26.586 net/ngbe: not in enabled drivers build config 00:02:26.586 net/null: not in enabled drivers build config 00:02:26.586 net/octeontx: not in enabled drivers build config 00:02:26.586 net/octeon_ep: not in enabled 
drivers build config 00:02:26.586 net/pcap: not in enabled drivers build config 00:02:26.586 net/pfe: not in enabled drivers build config 00:02:26.586 net/qede: not in enabled drivers build config 00:02:26.586 net/ring: not in enabled drivers build config 00:02:26.586 net/sfc: not in enabled drivers build config 00:02:26.586 net/softnic: not in enabled drivers build config 00:02:26.586 net/tap: not in enabled drivers build config 00:02:26.586 net/thunderx: not in enabled drivers build config 00:02:26.586 net/txgbe: not in enabled drivers build config 00:02:26.586 net/vdev_netvsc: not in enabled drivers build config 00:02:26.586 net/vhost: not in enabled drivers build config 00:02:26.586 net/virtio: not in enabled drivers build config 00:02:26.586 net/vmxnet3: not in enabled drivers build config 00:02:26.586 raw/*: missing internal dependency, "rawdev" 00:02:26.586 crypto/armv8: not in enabled drivers build config 00:02:26.586 crypto/bcmfs: not in enabled drivers build config 00:02:26.586 crypto/caam_jr: not in enabled drivers build config 00:02:26.586 crypto/ccp: not in enabled drivers build config 00:02:26.586 crypto/cnxk: not in enabled drivers build config 00:02:26.586 crypto/dpaa_sec: not in enabled drivers build config 00:02:26.586 crypto/dpaa2_sec: not in enabled drivers build config 00:02:26.586 crypto/ipsec_mb: not in enabled drivers build config 00:02:26.586 crypto/mlx5: not in enabled drivers build config 00:02:26.586 crypto/mvsam: not in enabled drivers build config 00:02:26.586 crypto/nitrox: not in enabled drivers build config 00:02:26.586 crypto/null: not in enabled drivers build config 00:02:26.586 crypto/octeontx: not in enabled drivers build config 00:02:26.586 crypto/openssl: not in enabled drivers build config 00:02:26.586 crypto/scheduler: not in enabled drivers build config 00:02:26.586 crypto/uadk: not in enabled drivers build config 00:02:26.586 crypto/virtio: not in enabled drivers build config 00:02:26.586 compress/isal: not in enabled drivers build config 00:02:26.586 compress/mlx5: not in enabled drivers build config 00:02:26.586 compress/nitrox: not in enabled drivers build config 00:02:26.586 compress/octeontx: not in enabled drivers build config 00:02:26.586 compress/zlib: not in enabled drivers build config 00:02:26.586 regex/*: missing internal dependency, "regexdev" 00:02:26.586 ml/*: missing internal dependency, "mldev" 00:02:26.586 vdpa/ifc: not in enabled drivers build config 00:02:26.586 vdpa/mlx5: not in enabled drivers build config 00:02:26.586 vdpa/nfp: not in enabled drivers build config 00:02:26.586 vdpa/sfc: not in enabled drivers build config 00:02:26.586 event/*: missing internal dependency, "eventdev" 00:02:26.586 baseband/*: missing internal dependency, "bbdev" 00:02:26.586 gpu/*: missing internal dependency, "gpudev" 00:02:26.586 00:02:26.586 00:02:26.586 Build targets in project: 85 00:02:26.586 00:02:26.586 DPDK 24.03.0 00:02:26.586 00:02:26.586 User defined options 00:02:26.586 buildtype : debug 00:02:26.586 default_library : static 00:02:26.586 libdir : lib 00:02:26.586 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:26.586 c_args : -fPIC -Werror 00:02:26.586 c_link_args : 00:02:26.586 cpu_instruction_set: native 00:02:26.586 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:26.586 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:26.586 enable_docs : false 00:02:26.586 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:26.586 enable_kmods : false 00:02:26.586 tests : false 00:02:26.586 00:02:26.586 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.844 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:27.113 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.113 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.113 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.113 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.113 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.113 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.113 [7/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.113 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.113 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:27.113 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.113 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.113 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.113 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.113 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.113 [15/268] Linking static target lib/librte_kvargs.a 00:02:27.113 [16/268] Linking static target lib/librte_log.a 00:02:27.113 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.113 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.113 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.113 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.113 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.113 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.113 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.113 [24/268] Linking static target lib/librte_pci.a 00:02:27.113 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.113 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.113 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.113 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.113 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.113 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:27.113 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.113 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.370 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.370 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:27.370 [35/268] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:27.370 [36/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.628 [37/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.628 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.628 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.628 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.628 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.628 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.628 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.628 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.628 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.628 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.628 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.628 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.628 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:27.628 [50/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.628 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.628 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.628 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.628 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.628 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.628 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.628 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.628 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.628 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.628 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.628 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.628 [62/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.628 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.628 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.628 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:27.628 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.628 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.628 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.628 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.628 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.628 [71/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:27.628 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.628 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.629 [74/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.629 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.629 [76/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:27.629 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.629 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.629 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.629 [80/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.629 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.629 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:27.629 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.629 [84/268] Linking static target lib/librte_telemetry.a 00:02:27.629 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.629 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.629 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.887 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.887 [89/268] Linking static target lib/librte_meter.a 00:02:27.887 [90/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:27.887 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.887 [92/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.887 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.887 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.887 [95/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.887 [96/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:27.887 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.887 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.887 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.887 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.887 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:27.887 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.887 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.887 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.887 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.887 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.887 [107/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.887 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.887 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.887 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.887 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:27.887 [112/268] Linking static target lib/librte_cmdline.a 00:02:27.887 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.887 [114/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 
00:02:27.887 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:27.887 [116/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.887 [117/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.887 [118/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.887 [119/268] Linking static target lib/librte_ring.a 00:02:27.887 [120/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.887 [121/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.887 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:27.887 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.887 [124/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.887 [125/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:27.887 [126/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.888 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:27.888 [128/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.888 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.888 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.888 [131/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.888 [132/268] Linking static target lib/librte_timer.a 00:02:27.888 [133/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.888 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.888 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.888 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:27.888 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.888 [138/268] Linking static target lib/librte_mempool.a 00:02:27.888 [139/268] Linking static target lib/librte_eal.a 00:02:27.888 [140/268] Linking static target lib/librte_rcu.a 00:02:27.888 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.888 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.888 [143/268] Linking target lib/librte_log.so.24.1 00:02:27.888 [144/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:27.888 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.888 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.888 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.888 [148/268] Linking static target lib/librte_net.a 00:02:27.888 [149/268] Linking static target lib/librte_compressdev.a 00:02:27.888 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.888 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.888 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.888 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.888 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:27.888 [155/268] Linking static target lib/librte_dmadev.a 00:02:27.888 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.147 [157/268] 
Linking static target lib/librte_hash.a 00:02:28.147 [158/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.147 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:28.147 [160/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.147 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:28.147 [162/268] Linking static target lib/librte_mbuf.a 00:02:28.147 [163/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.147 [164/268] Linking target lib/librte_kvargs.so.24.1 00:02:28.147 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.147 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:28.147 [167/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.147 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.147 [169/268] Linking static target lib/librte_security.a 00:02:28.147 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:28.147 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:28.147 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.147 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.147 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.147 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.147 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.147 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.147 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.147 [179/268] Linking static target lib/librte_reorder.a 00:02:28.147 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.147 [181/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.406 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.406 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.406 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.406 [185/268] Linking static target lib/librte_power.a 00:02:28.406 [186/268] Linking static target lib/librte_cryptodev.a 00:02:28.406 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:28.406 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.406 [189/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.406 [190/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.406 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:28.406 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:28.406 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:28.406 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.406 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:28.406 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:28.406 [197/268] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:28.406 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.406 [199/268] Linking target lib/librte_telemetry.so.24.1 00:02:28.406 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.406 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.406 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:28.406 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.406 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.406 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.665 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.665 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.665 [208/268] Linking static target drivers/librte_bus_vdev.a 00:02:28.665 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.665 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.665 [211/268] Linking static target drivers/librte_mempool_ring.a 00:02:28.665 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.665 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:28.665 [214/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.665 [215/268] Linking static target lib/librte_ethdev.a 00:02:28.665 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.665 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.665 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.924 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.924 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.924 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.924 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.924 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.185 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.185 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.527 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.527 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.527 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.527 [229/268] Linking static target lib/librte_vhost.a 00:02:30.904 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.840 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.406 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.939 [233/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.198 [234/268] Linking target lib/librte_eal.so.24.1 00:02:41.198 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.455 [236/268] Linking target lib/librte_ring.so.24.1 00:02:41.455 [237/268] Linking target lib/librte_timer.so.24.1 00:02:41.455 [238/268] Linking target lib/librte_meter.so.24.1 00:02:41.455 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.455 [240/268] Linking target lib/librte_pci.so.24.1 00:02:41.455 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.455 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.455 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.455 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.455 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.455 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.455 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:41.455 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:41.455 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.713 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.713 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.713 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.713 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.971 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.971 [255/268] Linking target lib/librte_net.so.24.1 00:02:41.971 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.971 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.971 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:42.230 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.230 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.230 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.230 [262/268] Linking target lib/librte_hash.so.24.1 00:02:42.230 [263/268] Linking target lib/librte_security.so.24.1 00:02:42.230 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.488 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.488 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.488 [267/268] Linking target lib/librte_power.so.24.1 00:02:42.488 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:42.488 INFO: autodetecting backend as ninja 00:02:42.488 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:43.423 CC lib/ut_mock/mock.o 00:02:43.423 CC lib/ut/ut.o 00:02:43.423 CC lib/log/log.o 00:02:43.423 CC lib/log/log_flags.o 00:02:43.423 CC lib/log/log_deprecated.o 00:02:43.682 LIB libspdk_ut.a 00:02:43.682 LIB libspdk_ut_mock.a 00:02:43.682 LIB libspdk_log.a 00:02:43.940 CXX lib/trace_parser/trace.o 00:02:43.940 CC lib/ioat/ioat.o 00:02:43.940 CC lib/util/bit_array.o 00:02:43.940 CC lib/util/base64.o 00:02:43.940 CC lib/util/crc16.o 00:02:43.940 CC lib/util/cpuset.o 00:02:43.940 CC lib/dma/dma.o 00:02:43.940 CC 
lib/util/crc32.o 00:02:43.940 CC lib/util/crc32c.o 00:02:43.940 CC lib/util/crc32_ieee.o 00:02:43.940 CC lib/util/crc64.o 00:02:43.940 CC lib/util/dif.o 00:02:43.940 CC lib/util/fd.o 00:02:43.940 CC lib/util/file.o 00:02:43.940 CC lib/util/hexlify.o 00:02:43.940 CC lib/util/iov.o 00:02:43.940 CC lib/util/math.o 00:02:43.940 CC lib/util/pipe.o 00:02:43.940 CC lib/util/strerror_tls.o 00:02:43.940 CC lib/util/string.o 00:02:43.940 CC lib/util/uuid.o 00:02:43.940 CC lib/util/fd_group.o 00:02:43.940 CC lib/util/xor.o 00:02:43.940 CC lib/util/zipf.o 00:02:44.199 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.199 CC lib/vfio_user/host/vfio_user.o 00:02:44.199 LIB libspdk_dma.a 00:02:44.199 LIB libspdk_ioat.a 00:02:44.457 LIB libspdk_vfio_user.a 00:02:44.457 LIB libspdk_util.a 00:02:44.714 LIB libspdk_trace_parser.a 00:02:44.714 CC lib/vmd/vmd.o 00:02:44.714 CC lib/vmd/led.o 00:02:44.714 CC lib/env_dpdk/env.o 00:02:44.714 CC lib/env_dpdk/memory.o 00:02:44.714 CC lib/env_dpdk/init.o 00:02:44.714 CC lib/env_dpdk/pci.o 00:02:44.714 CC lib/env_dpdk/threads.o 00:02:44.714 CC lib/idxd/idxd.o 00:02:44.714 CC lib/conf/conf.o 00:02:44.714 CC lib/idxd/idxd_user.o 00:02:44.714 CC lib/env_dpdk/pci_ioat.o 00:02:44.714 CC lib/env_dpdk/pci_virtio.o 00:02:44.714 CC lib/idxd/idxd_kernel.o 00:02:44.714 CC lib/env_dpdk/pci_idxd.o 00:02:44.714 CC lib/env_dpdk/pci_vmd.o 00:02:44.714 CC lib/env_dpdk/pci_event.o 00:02:44.714 CC lib/rdma/common.o 00:02:44.714 CC lib/env_dpdk/sigbus_handler.o 00:02:44.714 CC lib/rdma/rdma_verbs.o 00:02:44.714 CC lib/env_dpdk/pci_dpdk.o 00:02:44.715 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.715 CC lib/json/json_parse.o 00:02:44.715 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.715 CC lib/json/json_write.o 00:02:44.715 CC lib/json/json_util.o 00:02:44.972 LIB libspdk_conf.a 00:02:44.972 LIB libspdk_json.a 00:02:44.972 LIB libspdk_rdma.a 00:02:45.230 LIB libspdk_idxd.a 00:02:45.230 LIB libspdk_vmd.a 00:02:45.488 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.488 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.488 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.488 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.488 LIB libspdk_jsonrpc.a 00:02:46.054 CC lib/rpc/rpc.o 00:02:46.054 LIB libspdk_env_dpdk.a 00:02:46.054 LIB libspdk_rpc.a 00:02:46.620 CC lib/trace/trace.o 00:02:46.620 CC lib/trace/trace_flags.o 00:02:46.620 CC lib/trace/trace_rpc.o 00:02:46.620 CC lib/notify/notify.o 00:02:46.620 CC lib/notify/notify_rpc.o 00:02:46.620 CC lib/keyring/keyring.o 00:02:46.620 CC lib/keyring/keyring_rpc.o 00:02:46.620 LIB libspdk_notify.a 00:02:46.620 LIB libspdk_trace.a 00:02:46.620 LIB libspdk_keyring.a 00:02:46.878 CC lib/sock/sock.o 00:02:46.878 CC lib/sock/sock_rpc.o 00:02:47.136 CC lib/thread/thread.o 00:02:47.136 CC lib/thread/iobuf.o 00:02:47.394 LIB libspdk_sock.a 00:02:47.652 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.652 CC lib/nvme/nvme_ctrlr.o 00:02:47.652 CC lib/nvme/nvme_fabric.o 00:02:47.652 CC lib/nvme/nvme_ns_cmd.o 00:02:47.652 CC lib/nvme/nvme_ns.o 00:02:47.652 CC lib/nvme/nvme_pcie_common.o 00:02:47.652 CC lib/nvme/nvme_pcie.o 00:02:47.652 CC lib/nvme/nvme_qpair.o 00:02:47.652 CC lib/nvme/nvme.o 00:02:47.652 CC lib/nvme/nvme_quirks.o 00:02:47.652 CC lib/nvme/nvme_transport.o 00:02:47.652 CC lib/nvme/nvme_discovery.o 00:02:47.652 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.652 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.652 CC lib/nvme/nvme_tcp.o 00:02:47.652 CC lib/nvme/nvme_opal.o 00:02:47.652 CC lib/nvme/nvme_io_msg.o 00:02:47.652 CC lib/nvme/nvme_poll_group.o 00:02:47.652 CC lib/nvme/nvme_zns.o 00:02:47.652 CC 
lib/nvme/nvme_stubs.o 00:02:47.652 CC lib/nvme/nvme_auth.o 00:02:47.652 CC lib/nvme/nvme_cuse.o 00:02:47.652 CC lib/nvme/nvme_vfio_user.o 00:02:47.652 CC lib/nvme/nvme_rdma.o 00:02:47.910 LIB libspdk_thread.a 00:02:48.476 CC lib/init/subsystem.o 00:02:48.476 CC lib/init/json_config.o 00:02:48.476 CC lib/init/subsystem_rpc.o 00:02:48.476 CC lib/init/rpc.o 00:02:48.476 CC lib/blob/blobstore.o 00:02:48.476 CC lib/blob/zeroes.o 00:02:48.476 CC lib/blob/request.o 00:02:48.476 CC lib/blob/blob_bs_dev.o 00:02:48.476 CC lib/virtio/virtio.o 00:02:48.476 CC lib/accel/accel_rpc.o 00:02:48.476 CC lib/virtio/virtio_vhost_user.o 00:02:48.476 CC lib/virtio/virtio_vfio_user.o 00:02:48.476 CC lib/accel/accel.o 00:02:48.476 CC lib/accel/accel_sw.o 00:02:48.476 CC lib/virtio/virtio_pci.o 00:02:48.476 CC lib/vfu_tgt/tgt_endpoint.o 00:02:48.476 CC lib/vfu_tgt/tgt_rpc.o 00:02:48.476 LIB libspdk_init.a 00:02:48.734 LIB libspdk_virtio.a 00:02:48.734 LIB libspdk_vfu_tgt.a 00:02:48.992 CC lib/event/app.o 00:02:48.992 CC lib/event/reactor.o 00:02:48.992 CC lib/event/log_rpc.o 00:02:48.992 CC lib/event/app_rpc.o 00:02:48.992 CC lib/event/scheduler_static.o 00:02:49.250 LIB libspdk_event.a 00:02:49.250 LIB libspdk_accel.a 00:02:49.250 LIB libspdk_nvme.a 00:02:49.508 CC lib/bdev/bdev.o 00:02:49.508 CC lib/bdev/bdev_rpc.o 00:02:49.508 CC lib/bdev/bdev_zone.o 00:02:49.508 CC lib/bdev/part.o 00:02:49.508 CC lib/bdev/scsi_nvme.o 00:02:50.884 LIB libspdk_blob.a 00:02:50.884 CC lib/blobfs/blobfs.o 00:02:50.884 CC lib/blobfs/tree.o 00:02:50.884 CC lib/lvol/lvol.o 00:02:51.819 LIB libspdk_lvol.a 00:02:51.819 LIB libspdk_blobfs.a 00:02:51.819 LIB libspdk_bdev.a 00:02:52.077 CC lib/ftl/ftl_init.o 00:02:52.077 CC lib/ftl/ftl_core.o 00:02:52.077 CC lib/ftl/ftl_debug.o 00:02:52.077 CC lib/ftl/ftl_layout.o 00:02:52.077 CC lib/ftl/ftl_io.o 00:02:52.077 CC lib/ftl/ftl_sb.o 00:02:52.077 CC lib/ftl/ftl_l2p_flat.o 00:02:52.077 CC lib/ftl/ftl_l2p.o 00:02:52.077 CC lib/ftl/ftl_nv_cache.o 00:02:52.077 CC lib/ftl/ftl_band.o 00:02:52.077 CC lib/ftl/ftl_writer.o 00:02:52.077 CC lib/ftl/ftl_band_ops.o 00:02:52.077 CC lib/ftl/ftl_rq.o 00:02:52.077 CC lib/ftl/ftl_reloc.o 00:02:52.335 CC lib/ftl/ftl_p2l.o 00:02:52.335 CC lib/ftl/ftl_l2p_cache.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.335 CC lib/nbd/nbd.o 00:02:52.335 CC lib/nbd/nbd_rpc.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.335 CC lib/ublk/ublk.o 00:02:52.335 CC lib/scsi/dev.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.335 CC lib/ublk/ublk_rpc.o 00:02:52.335 CC lib/scsi/lun.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.335 CC lib/scsi/port.o 00:02:52.335 CC lib/scsi/scsi.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.335 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.335 CC lib/scsi/scsi_bdev.o 00:02:52.335 CC lib/scsi/scsi_pr.o 00:02:52.335 CC lib/nvmf/ctrlr.o 00:02:52.335 CC lib/ftl/utils/ftl_conf.o 00:02:52.335 CC lib/scsi/scsi_rpc.o 00:02:52.335 CC lib/nvmf/ctrlr_discovery.o 00:02:52.335 CC lib/ftl/utils/ftl_md.o 00:02:52.335 CC lib/scsi/task.o 00:02:52.335 CC lib/ftl/utils/ftl_mempool.o 00:02:52.335 CC lib/nvmf/ctrlr_bdev.o 00:02:52.335 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.335 CC lib/nvmf/subsystem.o 00:02:52.335 CC 
lib/ftl/utils/ftl_property.o 00:02:52.335 CC lib/nvmf/nvmf.o 00:02:52.335 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.335 CC lib/nvmf/nvmf_rpc.o 00:02:52.335 CC lib/nvmf/transport.o 00:02:52.335 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.335 CC lib/nvmf/tcp.o 00:02:52.335 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.335 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.335 CC lib/nvmf/mdns_server.o 00:02:52.335 CC lib/nvmf/stubs.o 00:02:52.335 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.335 CC lib/nvmf/vfio_user.o 00:02:52.335 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.335 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.335 CC lib/nvmf/rdma.o 00:02:52.335 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.335 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.335 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:52.335 CC lib/nvmf/auth.o 00:02:52.335 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.335 CC lib/ftl/base/ftl_base_dev.o 00:02:52.335 CC lib/ftl/base/ftl_base_bdev.o 00:02:52.335 CC lib/ftl/ftl_trace.o 00:02:52.593 LIB libspdk_scsi.a 00:02:52.593 LIB libspdk_nbd.a 00:02:52.851 LIB libspdk_ublk.a 00:02:53.110 CC lib/iscsi/conn.o 00:02:53.110 CC lib/vhost/vhost.o 00:02:53.110 CC lib/iscsi/init_grp.o 00:02:53.110 CC lib/iscsi/iscsi.o 00:02:53.110 CC lib/vhost/vhost_rpc.o 00:02:53.110 CC lib/iscsi/md5.o 00:02:53.110 CC lib/vhost/vhost_scsi.o 00:02:53.110 CC lib/iscsi/param.o 00:02:53.110 CC lib/vhost/vhost_blk.o 00:02:53.110 CC lib/iscsi/portal_grp.o 00:02:53.110 CC lib/vhost/rte_vhost_user.o 00:02:53.110 CC lib/iscsi/tgt_node.o 00:02:53.110 CC lib/iscsi/iscsi_subsystem.o 00:02:53.110 CC lib/iscsi/iscsi_rpc.o 00:02:53.110 CC lib/iscsi/task.o 00:02:53.110 LIB libspdk_ftl.a 00:02:53.677 LIB libspdk_vhost.a 00:02:53.935 LIB libspdk_nvmf.a 00:02:53.935 LIB libspdk_iscsi.a 00:02:54.500 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.500 CC module/vfu_device/vfu_virtio_blk.o 00:02:54.500 CC module/vfu_device/vfu_virtio.o 00:02:54.500 CC module/vfu_device/vfu_virtio_scsi.o 00:02:54.500 CC module/vfu_device/vfu_virtio_rpc.o 00:02:54.759 CC module/blob/bdev/blob_bdev.o 00:02:54.759 CC module/accel/ioat/accel_ioat.o 00:02:54.759 CC module/sock/posix/posix.o 00:02:54.759 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.759 CC module/accel/error/accel_error.o 00:02:54.759 CC module/accel/error/accel_error_rpc.o 00:02:54.759 CC module/keyring/file/keyring.o 00:02:54.759 CC module/keyring/file/keyring_rpc.o 00:02:54.759 CC module/accel/dsa/accel_dsa.o 00:02:54.759 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.759 CC module/accel/iaa/accel_iaa.o 00:02:54.759 LIB libspdk_env_dpdk_rpc.a 00:02:54.759 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.759 CC module/keyring/linux/keyring.o 00:02:54.759 CC module/keyring/linux/keyring_rpc.o 00:02:54.759 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.759 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.759 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.759 LIB libspdk_keyring_file.a 00:02:54.759 LIB libspdk_keyring_linux.a 00:02:54.759 LIB libspdk_accel_error.a 00:02:54.759 LIB libspdk_scheduler_gscheduler.a 00:02:54.759 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.759 LIB libspdk_accel_ioat.a 00:02:54.759 LIB libspdk_accel_iaa.a 00:02:54.759 LIB libspdk_scheduler_dynamic.a 00:02:54.759 LIB libspdk_blob_bdev.a 00:02:55.017 LIB libspdk_accel_dsa.a 00:02:55.017 LIB libspdk_vfu_device.a 00:02:55.275 LIB libspdk_sock_posix.a 00:02:55.275 CC module/bdev/malloc/bdev_malloc.o 00:02:55.275 CC module/bdev/aio/bdev_aio.o 00:02:55.275 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:02:55.275 CC module/bdev/split/vbdev_split.o 00:02:55.275 CC module/bdev/split/vbdev_split_rpc.o 00:02:55.275 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.275 CC module/bdev/raid/bdev_raid.o 00:02:55.275 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.275 CC module/bdev/raid/bdev_raid_sb.o 00:02:55.275 CC module/bdev/raid/raid0.o 00:02:55.275 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.275 CC module/bdev/raid/raid1.o 00:02:55.275 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.275 CC module/bdev/raid/concat.o 00:02:55.275 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.275 CC module/bdev/gpt/gpt.o 00:02:55.275 CC module/bdev/iscsi/bdev_iscsi.o 00:02:55.275 CC module/bdev/null/bdev_null_rpc.o 00:02:55.275 CC module/bdev/null/bdev_null.o 00:02:55.275 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.275 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.275 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.275 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.275 CC module/bdev/delay/vbdev_delay.o 00:02:55.275 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.275 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.275 CC module/bdev/error/vbdev_error.o 00:02:55.275 CC module/bdev/ftl/bdev_ftl.o 00:02:55.275 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.275 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.275 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.275 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.275 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.275 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.275 CC module/bdev/nvme/bdev_nvme.o 00:02:55.275 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.275 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.275 CC module/bdev/nvme/nvme_rpc.o 00:02:55.275 CC module/bdev/nvme/vbdev_opal.o 00:02:55.275 CC module/bdev/nvme/bdev_mdns_client.o 00:02:55.275 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:55.275 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:55.533 LIB libspdk_blobfs_bdev.a 00:02:55.533 LIB libspdk_bdev_split.a 00:02:55.533 LIB libspdk_bdev_null.a 00:02:55.533 LIB libspdk_bdev_gpt.a 00:02:55.533 LIB libspdk_bdev_error.a 00:02:55.533 LIB libspdk_bdev_malloc.a 00:02:55.533 LIB libspdk_bdev_ftl.a 00:02:55.533 LIB libspdk_bdev_aio.a 00:02:55.533 LIB libspdk_bdev_passthru.a 00:02:55.533 LIB libspdk_bdev_zone_block.a 00:02:55.533 LIB libspdk_bdev_iscsi.a 00:02:55.791 LIB libspdk_bdev_delay.a 00:02:55.791 LIB libspdk_bdev_lvol.a 00:02:55.791 LIB libspdk_bdev_virtio.a 00:02:55.791 LIB libspdk_bdev_raid.a 00:02:57.230 LIB libspdk_bdev_nvme.a 00:02:57.796 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:57.796 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:57.796 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.796 CC module/event/subsystems/sock/sock.o 00:02:57.796 CC module/event/subsystems/vmd/vmd.o 00:02:57.796 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.796 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.796 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.796 CC module/event/subsystems/keyring/keyring.o 00:02:57.796 LIB libspdk_event_vhost_blk.a 00:02:57.796 LIB libspdk_event_vfu_tgt.a 00:02:57.796 LIB libspdk_event_keyring.a 00:02:57.796 LIB libspdk_event_vmd.a 00:02:57.796 LIB libspdk_event_scheduler.a 00:02:57.796 LIB libspdk_event_sock.a 00:02:57.796 LIB libspdk_event_iobuf.a 00:02:58.054 CC module/event/subsystems/accel/accel.o 00:02:58.312 LIB libspdk_event_accel.a 00:02:58.571 CC module/event/subsystems/bdev/bdev.o 00:02:58.829 LIB libspdk_event_bdev.a 00:02:59.087 CC module/event/subsystems/ublk/ublk.o 00:02:59.087 
CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.087 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.087 CC module/event/subsystems/scsi/scsi.o 00:02:59.087 CC module/event/subsystems/nbd/nbd.o 00:02:59.345 LIB libspdk_event_ublk.a 00:02:59.345 LIB libspdk_event_nbd.a 00:02:59.345 LIB libspdk_event_scsi.a 00:02:59.345 LIB libspdk_event_nvmf.a 00:02:59.603 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.603 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.861 LIB libspdk_event_vhost_scsi.a 00:02:59.861 LIB libspdk_event_iscsi.a 00:03:00.119 CC app/trace_record/trace_record.o 00:03:00.119 CC app/spdk_nvme_perf/perf.o 00:03:00.119 CXX app/trace/trace.o 00:03:00.119 TEST_HEADER include/spdk/accel.h 00:03:00.119 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.119 TEST_HEADER include/spdk/accel_module.h 00:03:00.119 TEST_HEADER include/spdk/assert.h 00:03:00.119 TEST_HEADER include/spdk/barrier.h 00:03:00.119 TEST_HEADER include/spdk/base64.h 00:03:00.119 TEST_HEADER include/spdk/bdev.h 00:03:00.119 CC app/spdk_top/spdk_top.o 00:03:00.119 TEST_HEADER include/spdk/bdev_module.h 00:03:00.119 TEST_HEADER include/spdk/bdev_zone.h 00:03:00.119 TEST_HEADER include/spdk/bit_array.h 00:03:00.119 TEST_HEADER include/spdk/bit_pool.h 00:03:00.119 CC app/spdk_nvme_identify/identify.o 00:03:00.119 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.119 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.119 TEST_HEADER include/spdk/blobfs.h 00:03:00.119 CC test/rpc_client/rpc_client_test.o 00:03:00.119 TEST_HEADER include/spdk/blob.h 00:03:00.119 TEST_HEADER include/spdk/conf.h 00:03:00.119 TEST_HEADER include/spdk/config.h 00:03:00.119 TEST_HEADER include/spdk/cpuset.h 00:03:00.119 TEST_HEADER include/spdk/crc16.h 00:03:00.120 TEST_HEADER include/spdk/crc32.h 00:03:00.120 TEST_HEADER include/spdk/crc64.h 00:03:00.120 TEST_HEADER include/spdk/dif.h 00:03:00.120 TEST_HEADER include/spdk/dma.h 00:03:00.120 CC app/spdk_lspci/spdk_lspci.o 00:03:00.120 TEST_HEADER include/spdk/endian.h 00:03:00.120 TEST_HEADER include/spdk/event.h 00:03:00.120 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.120 TEST_HEADER include/spdk/env.h 00:03:00.120 TEST_HEADER include/spdk/fd_group.h 00:03:00.120 TEST_HEADER include/spdk/fd.h 00:03:00.120 TEST_HEADER include/spdk/file.h 00:03:00.120 TEST_HEADER include/spdk/ftl.h 00:03:00.120 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.120 TEST_HEADER include/spdk/hexlify.h 00:03:00.120 TEST_HEADER include/spdk/histogram_data.h 00:03:00.120 TEST_HEADER include/spdk/idxd.h 00:03:00.120 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.120 TEST_HEADER include/spdk/ioat.h 00:03:00.120 TEST_HEADER include/spdk/init.h 00:03:00.120 TEST_HEADER include/spdk/ioat_spec.h 00:03:00.120 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.120 TEST_HEADER include/spdk/json.h 00:03:00.120 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.120 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:00.120 TEST_HEADER include/spdk/keyring_module.h 00:03:00.120 TEST_HEADER include/spdk/keyring.h 00:03:00.120 TEST_HEADER include/spdk/likely.h 00:03:00.120 TEST_HEADER include/spdk/log.h 00:03:00.120 TEST_HEADER include/spdk/memory.h 00:03:00.120 TEST_HEADER include/spdk/lvol.h 00:03:00.120 TEST_HEADER include/spdk/mmio.h 00:03:00.120 TEST_HEADER include/spdk/notify.h 00:03:00.120 TEST_HEADER include/spdk/nvme.h 00:03:00.120 TEST_HEADER include/spdk/nbd.h 00:03:00.120 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.120 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.120 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.120 
TEST_HEADER include/spdk/nvme_spec.h 00:03:00.120 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.120 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.120 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.120 TEST_HEADER include/spdk/nvmf.h 00:03:00.120 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.120 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.120 TEST_HEADER include/spdk/opal.h 00:03:00.120 TEST_HEADER include/spdk/pci_ids.h 00:03:00.120 TEST_HEADER include/spdk/opal_spec.h 00:03:00.120 TEST_HEADER include/spdk/queue.h 00:03:00.120 TEST_HEADER include/spdk/pipe.h 00:03:00.120 TEST_HEADER include/spdk/reduce.h 00:03:00.120 TEST_HEADER include/spdk/scheduler.h 00:03:00.120 CC app/spdk_dd/spdk_dd.o 00:03:00.120 TEST_HEADER include/spdk/rpc.h 00:03:00.120 CC app/nvmf_tgt/nvmf_main.o 00:03:00.120 TEST_HEADER include/spdk/scsi_spec.h 00:03:00.120 TEST_HEADER include/spdk/scsi.h 00:03:00.120 TEST_HEADER include/spdk/sock.h 00:03:00.120 TEST_HEADER include/spdk/string.h 00:03:00.120 TEST_HEADER include/spdk/stdinc.h 00:03:00.120 TEST_HEADER include/spdk/thread.h 00:03:00.120 TEST_HEADER include/spdk/trace.h 00:03:00.120 TEST_HEADER include/spdk/trace_parser.h 00:03:00.120 TEST_HEADER include/spdk/tree.h 00:03:00.120 TEST_HEADER include/spdk/ublk.h 00:03:00.120 TEST_HEADER include/spdk/util.h 00:03:00.120 CC app/vhost/vhost.o 00:03:00.120 TEST_HEADER include/spdk/uuid.h 00:03:00.120 TEST_HEADER include/spdk/version.h 00:03:00.120 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.120 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.120 TEST_HEADER include/spdk/vhost.h 00:03:00.120 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.120 TEST_HEADER include/spdk/vmd.h 00:03:00.120 TEST_HEADER include/spdk/xor.h 00:03:00.120 TEST_HEADER include/spdk/zipf.h 00:03:00.120 CXX test/cpp_headers/accel.o 00:03:00.120 CXX test/cpp_headers/accel_module.o 00:03:00.386 CXX test/cpp_headers/assert.o 00:03:00.386 CXX test/cpp_headers/barrier.o 00:03:00.386 CXX test/cpp_headers/base64.o 00:03:00.386 CXX test/cpp_headers/bdev.o 00:03:00.386 CXX test/cpp_headers/bdev_module.o 00:03:00.386 CXX test/cpp_headers/bit_array.o 00:03:00.386 CXX test/cpp_headers/bit_pool.o 00:03:00.386 CXX test/cpp_headers/bdev_zone.o 00:03:00.386 CXX test/cpp_headers/blob_bdev.o 00:03:00.386 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.386 CXX test/cpp_headers/blobfs.o 00:03:00.386 CXX test/cpp_headers/conf.o 00:03:00.386 CXX test/cpp_headers/blob.o 00:03:00.386 CXX test/cpp_headers/config.o 00:03:00.386 CXX test/cpp_headers/crc16.o 00:03:00.386 CXX test/cpp_headers/cpuset.o 00:03:00.386 CC app/spdk_tgt/spdk_tgt.o 00:03:00.386 CXX test/cpp_headers/crc32.o 00:03:00.386 CXX test/cpp_headers/crc64.o 00:03:00.386 CXX test/cpp_headers/dif.o 00:03:00.386 CXX test/cpp_headers/dma.o 00:03:00.386 CXX test/cpp_headers/endian.o 00:03:00.386 CXX test/cpp_headers/env_dpdk.o 00:03:00.386 CXX test/cpp_headers/env.o 00:03:00.386 CC examples/accel/perf/accel_perf.o 00:03:00.386 CXX test/cpp_headers/event.o 00:03:00.386 CXX test/cpp_headers/fd_group.o 00:03:00.386 CXX test/cpp_headers/fd.o 00:03:00.386 CXX test/cpp_headers/file.o 00:03:00.386 CXX test/cpp_headers/ftl.o 00:03:00.386 CXX test/cpp_headers/gpt_spec.o 00:03:00.386 CXX test/cpp_headers/hexlify.o 00:03:00.386 CXX test/cpp_headers/histogram_data.o 00:03:00.386 CXX test/cpp_headers/idxd.o 00:03:00.386 CXX test/cpp_headers/init.o 00:03:00.386 CXX test/cpp_headers/idxd_spec.o 00:03:00.386 CC examples/nvme/reconnect/reconnect.o 00:03:00.386 CC examples/ioat/verify/verify.o 00:03:00.386 CC examples/ioat/perf/perf.o 
00:03:00.386 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.386 CC examples/util/zipf/zipf.o 00:03:00.386 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.386 CC examples/sock/hello_world/hello_sock.o 00:03:00.386 CC examples/nvme/hotplug/hotplug.o 00:03:00.386 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.386 CC examples/nvme/abort/abort.o 00:03:00.386 CC examples/nvme/arbitration/arbitration.o 00:03:00.386 CC test/event/reactor_perf/reactor_perf.o 00:03:00.386 CC examples/nvme/hello_world/hello_world.o 00:03:00.386 CC examples/idxd/perf/perf.o 00:03:00.386 CC examples/vmd/led/led.o 00:03:00.386 CC examples/vmd/lsvmd/lsvmd.o 00:03:00.386 CC test/nvme/sgl/sgl.o 00:03:00.386 CC test/env/pci/pci_ut.o 00:03:00.386 CC test/event/event_perf/event_perf.o 00:03:00.386 CC test/event/reactor/reactor.o 00:03:00.386 CC test/nvme/aer/aer.o 00:03:00.386 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.386 CC test/thread/poller_perf/poller_perf.o 00:03:00.386 CC test/nvme/startup/startup.o 00:03:00.386 CC test/nvme/reserve/reserve.o 00:03:00.386 CC test/app/stub/stub.o 00:03:00.386 CC test/app/jsoncat/jsoncat.o 00:03:00.386 CC test/app/histogram_perf/histogram_perf.o 00:03:00.386 CC test/nvme/err_injection/err_injection.o 00:03:00.386 CC app/fio/nvme/fio_plugin.o 00:03:00.386 CC test/nvme/simple_copy/simple_copy.o 00:03:00.386 CC test/nvme/cuse/cuse.o 00:03:00.386 CC test/nvme/boot_partition/boot_partition.o 00:03:00.386 CC test/nvme/connect_stress/connect_stress.o 00:03:00.386 CXX test/cpp_headers/ioat.o 00:03:00.386 CC test/nvme/reset/reset.o 00:03:00.386 CC test/nvme/e2edp/nvme_dp.o 00:03:00.386 CC test/nvme/fdp/fdp.o 00:03:00.386 CC test/env/memory/memory_ut.o 00:03:00.386 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.386 CC test/nvme/overhead/overhead.o 00:03:00.386 CC test/event/app_repeat/app_repeat.o 00:03:00.386 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.386 CC examples/nvmf/nvmf/nvmf.o 00:03:00.387 CC test/nvme/compliance/nvme_compliance.o 00:03:00.387 CC test/env/vtophys/vtophys.o 00:03:00.387 CC examples/blob/hello_world/hello_blob.o 00:03:00.387 CC examples/blob/cli/blobcli.o 00:03:00.387 CC test/thread/lock/spdk_lock.o 00:03:00.387 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.387 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.387 CC examples/thread/thread/thread_ex.o 00:03:00.387 LINK spdk_lspci 00:03:00.387 CC app/fio/bdev/fio_plugin.o 00:03:00.387 CC test/dma/test_dma/test_dma.o 00:03:00.387 CC test/accel/dif/dif.o 00:03:00.387 CC test/event/scheduler/scheduler.o 00:03:00.387 CC test/app/bdev_svc/bdev_svc.o 00:03:00.387 CC test/bdev/bdevio/bdevio.o 00:03:00.387 CC test/blobfs/mkfs/mkfs.o 00:03:00.387 LINK spdk_nvme_discover 00:03:00.387 CC test/lvol/esnap/esnap.o 00:03:00.387 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.387 LINK rpc_client_test 00:03:00.387 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.387 CXX test/cpp_headers/ioat_spec.o 00:03:00.647 LINK spdk_trace_record 00:03:00.647 LINK interrupt_tgt 00:03:00.647 LINK vtophys 00:03:00.647 CXX test/cpp_headers/iscsi_spec.o 00:03:00.647 CXX test/cpp_headers/json.o 00:03:00.647 LINK lsvmd 00:03:00.647 LINK led 00:03:00.647 CXX test/cpp_headers/jsonrpc.o 00:03:00.647 CXX test/cpp_headers/keyring.o 00:03:00.647 LINK zipf 00:03:00.647 CXX test/cpp_headers/keyring_module.o 00:03:00.647 CXX test/cpp_headers/likely.o 00:03:00.647 CXX test/cpp_headers/log.o 00:03:00.647 LINK nvmf_tgt 00:03:00.647 LINK reactor_perf 00:03:00.647 CXX test/cpp_headers/lvol.o 00:03:00.647 CXX 
test/cpp_headers/memory.o 00:03:00.647 CXX test/cpp_headers/mmio.o 00:03:00.647 CXX test/cpp_headers/nbd.o 00:03:00.647 LINK app_repeat 00:03:00.647 CXX test/cpp_headers/notify.o 00:03:00.647 LINK vhost 00:03:00.647 CXX test/cpp_headers/nvme.o 00:03:00.647 CXX test/cpp_headers/nvme_intel.o 00:03:00.647 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.647 LINK reactor 00:03:00.647 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.647 CXX test/cpp_headers/nvme_spec.o 00:03:00.647 CXX test/cpp_headers/nvme_zns.o 00:03:00.647 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.647 LINK jsoncat 00:03:00.647 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.647 LINK event_perf 00:03:00.647 CXX test/cpp_headers/nvmf.o 00:03:00.648 CXX test/cpp_headers/nvmf_spec.o 00:03:00.648 CXX test/cpp_headers/nvmf_transport.o 00:03:00.648 LINK poller_perf 00:03:00.648 CXX test/cpp_headers/opal.o 00:03:00.648 LINK histogram_perf 00:03:00.648 CXX test/cpp_headers/opal_spec.o 00:03:00.648 CXX test/cpp_headers/pci_ids.o 00:03:00.648 CXX test/cpp_headers/pipe.o 00:03:00.648 CXX test/cpp_headers/queue.o 00:03:00.648 CXX test/cpp_headers/reduce.o 00:03:00.648 LINK pmr_persistence 00:03:00.648 LINK env_dpdk_post_init 00:03:00.648 LINK boot_partition 00:03:00.648 CXX test/cpp_headers/rpc.o 00:03:00.648 LINK startup 00:03:00.648 LINK iscsi_tgt 00:03:00.648 CXX test/cpp_headers/scheduler.o 00:03:00.648 CXX test/cpp_headers/scsi.o 00:03:00.648 LINK stub 00:03:00.648 LINK cmb_copy 00:03:00.648 LINK doorbell_aers 00:03:00.648 LINK ioat_perf 00:03:00.648 LINK reserve 00:03:00.648 LINK connect_stress 00:03:00.648 LINK verify 00:03:00.648 LINK err_injection 00:03:00.648 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:00.648 LINK hello_world 00:03:00.648 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:00.648 struct spdk_nvme_fdp_ruhs ruhs; 00:03:00.648 ^ 00:03:00.648 LINK hotplug 00:03:00.648 LINK fused_ordering 00:03:00.648 CXX test/cpp_headers/scsi_spec.o 00:03:00.648 LINK spdk_tgt 00:03:00.648 LINK hello_sock 00:03:00.648 LINK scheduler 00:03:00.648 LINK bdev_svc 00:03:00.648 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:00.648 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:00.648 LINK simple_copy 00:03:00.648 LINK hello_blob 00:03:00.648 LINK hello_bdev 00:03:00.648 LINK reset 00:03:00.915 LINK idxd_perf 00:03:00.915 LINK nvme_dp 00:03:00.915 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:00.915 LINK mkfs 00:03:00.915 LINK fdp 00:03:00.915 LINK sgl 00:03:00.915 CXX test/cpp_headers/sock.o 00:03:00.915 LINK aer 00:03:00.915 CXX test/cpp_headers/stdinc.o 00:03:00.915 LINK thread 00:03:00.915 LINK overhead 00:03:00.915 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:00.915 LINK spdk_trace 00:03:00.915 LINK nvmf 00:03:00.915 CXX test/cpp_headers/string.o 00:03:00.915 CXX test/cpp_headers/thread.o 00:03:00.915 CXX test/cpp_headers/trace.o 00:03:00.915 CXX test/cpp_headers/trace_parser.o 00:03:00.916 CXX test/cpp_headers/tree.o 00:03:00.916 CXX test/cpp_headers/ublk.o 00:03:00.916 CXX test/cpp_headers/util.o 00:03:00.916 CXX test/cpp_headers/uuid.o 00:03:00.916 CXX test/cpp_headers/version.o 00:03:00.916 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.916 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.916 LINK arbitration 00:03:00.916 LINK reconnect 00:03:00.916 CXX test/cpp_headers/vhost.o 00:03:00.916 CXX test/cpp_headers/vmd.o 00:03:00.916 CXX test/cpp_headers/xor.o 00:03:00.916 CXX 
test/cpp_headers/zipf.o 00:03:00.916 LINK abort 00:03:00.916 LINK nvme_manage 00:03:00.916 LINK spdk_dd 00:03:00.916 LINK test_dma 00:03:00.916 LINK dif 00:03:00.916 LINK pci_ut 00:03:00.916 LINK bdevio 00:03:01.176 LINK accel_perf 00:03:01.176 LINK nvme_compliance 00:03:01.176 LINK nvme_fuzz 00:03:01.176 LINK blobcli 00:03:01.176 LINK spdk_nvme_identify 00:03:01.176 1 warning generated. 00:03:01.176 LINK llvm_vfio_fuzz 00:03:01.176 LINK mem_callbacks 00:03:01.176 LINK spdk_bdev 00:03:01.435 LINK spdk_nvme 00:03:01.435 LINK bdevperf 00:03:01.435 LINK vhost_fuzz 00:03:01.435 LINK spdk_nvme_perf 00:03:01.435 LINK spdk_top 00:03:01.693 LINK memory_ut 00:03:01.693 LINK llvm_nvme_fuzz 00:03:01.952 LINK spdk_lock 00:03:02.211 LINK cuse 00:03:02.211 LINK iscsi_fuzz 00:03:05.499 LINK esnap 00:03:06.068 00:03:06.068 real 0m49.122s 00:03:06.068 user 7m23.185s 00:03:06.068 sys 3m3.704s 00:03:06.068 22:53:58 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:06.068 22:53:58 make -- common/autotest_common.sh@10 -- $ set +x 00:03:06.068 ************************************ 00:03:06.068 END TEST make 00:03:06.068 ************************************ 00:03:06.068 22:53:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:06.068 22:53:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:06.068 22:53:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:06.068 22:53:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:06.068 22:53:58 -- pm/common@44 -- $ pid=4008622 00:03:06.068 22:53:58 -- pm/common@50 -- $ kill -TERM 4008622 00:03:06.068 22:53:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:06.068 22:53:58 -- pm/common@44 -- $ pid=4008623 00:03:06.068 22:53:58 -- pm/common@50 -- $ kill -TERM 4008623 00:03:06.068 22:53:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:06.068 22:53:58 -- pm/common@44 -- $ pid=4008625 00:03:06.068 22:53:58 -- pm/common@50 -- $ kill -TERM 4008625 00:03:06.068 22:53:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:06.068 22:53:58 -- pm/common@44 -- $ pid=4008649 00:03:06.068 22:53:58 -- pm/common@50 -- $ sudo -E kill -TERM 4008649 00:03:06.068 22:53:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:06.068 22:53:58 -- nvmf/common.sh@7 -- # uname -s 00:03:06.068 22:53:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:06.068 22:53:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:06.068 22:53:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:06.068 22:53:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:06.068 22:53:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:06.068 22:53:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:06.068 22:53:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:06.068 22:53:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:06.068 22:53:58 -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:06.068 22:53:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:06.068 22:53:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:06.068 22:53:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:06.068 22:53:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:06.068 22:53:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:06.068 22:53:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:06.068 22:53:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:06.068 22:53:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:06.068 22:53:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:06.068 22:53:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:06.068 22:53:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:06.068 22:53:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.068 22:53:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.068 22:53:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.068 22:53:58 -- paths/export.sh@5 -- # export PATH 00:03:06.068 22:53:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.068 22:53:58 -- nvmf/common.sh@47 -- # : 0 00:03:06.068 22:53:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:06.068 22:53:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:06.068 22:53:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:06.068 22:53:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:06.068 22:53:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:06.068 22:53:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:06.068 22:53:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:06.068 22:53:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:06.068 22:53:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:06.068 22:53:58 -- spdk/autotest.sh@32 -- # uname -s 00:03:06.068 22:53:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:06.068 22:53:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:06.068 22:53:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:06.068 22:53:58 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:06.068 22:53:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:06.068 22:53:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:06.068 22:53:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:06.068 22:53:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:06.068 22:53:58 -- spdk/autotest.sh@48 -- # udevadm_pid=4071965 00:03:06.068 22:53:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:06.068 22:53:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:06.068 22:53:58 -- pm/common@17 -- # local monitor 00:03:06.068 22:53:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@21 -- # date +%s 00:03:06.068 22:53:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.068 22:53:58 -- pm/common@21 -- # date +%s 00:03:06.068 22:53:58 -- pm/common@25 -- # sleep 1 00:03:06.068 22:53:58 -- pm/common@21 -- # date +%s 00:03:06.068 22:53:58 -- pm/common@21 -- # date +%s 00:03:06.068 22:53:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638 00:03:06.068 22:53:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638 00:03:06.068 22:53:58 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638 00:03:06.068 22:53:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638 00:03:06.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793638_collect-vmstat.pm.log 00:03:06.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793638_collect-cpu-temp.pm.log 00:03:06.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793638_collect-cpu-load.pm.log 00:03:06.328 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793638_collect-bmc-pm.bmc.pm.log 00:03:07.266 22:53:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:07.266 22:53:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:07.266 22:53:59 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:07.266 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:03:07.266 22:53:59 -- spdk/autotest.sh@59 -- # create_test_list 00:03:07.266 22:53:59 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:07.266 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:03:07.266 22:53:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:07.266 22:53:59 -- spdk/autotest.sh@61 -- 
# readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:07.266 22:53:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:07.266 22:53:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:07.266 22:53:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:07.266 22:53:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:07.266 22:53:59 -- common/autotest_common.sh@1454 -- # uname 00:03:07.266 22:53:59 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:03:07.266 22:53:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:07.266 22:53:59 -- common/autotest_common.sh@1474 -- # uname 00:03:07.266 22:53:59 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:07.266 22:53:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:07.266 22:53:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:07.266 22:53:59 -- spdk/autotest.sh@72 -- # hash lcov 00:03:07.266 22:53:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:07.266 22:53:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:07.266 22:53:59 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:07.266 22:53:59 -- common/autotest_common.sh@10 -- # set +x 00:03:07.266 22:53:59 -- spdk/autotest.sh@91 -- # rm -f 00:03:07.266 22:53:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.460 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:11.460 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:11.719 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:11.719 22:54:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:11.719 22:54:03 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:11.719 22:54:03 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:11.719 22:54:03 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:11.719 22:54:03 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:11.719 22:54:03 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:11.719 22:54:03 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:11.719 22:54:03 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.719 22:54:03 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:11.719 22:54:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 
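Annotation: the autotest.sh@96 step traced above calls get_zoned_devs, and the trace shows how it decides: for each /sys/block/nvme* entry it reads queue/zoned and treats the device as zoned only when that attribute is not "none" (here nvme0n1 reads "none", so the count stays 0 and `(( 0 > 0 ))` fails). A minimal standalone bash sketch of that probe follows; the traced helper names (get_zoned_devs, is_block_zoned) are real, but any internals beyond what the trace shows are an assumption:

    #!/usr/bin/env bash
    # Sketch: collect zoned NVMe block devices the way the traced
    # get_zoned_devs/is_block_zoned pair does. A device counts as zoned
    # iff /sys/block/<dev>/queue/zoned exists and reads anything but "none".
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
    done
    echo "zoned devices found: ${#zoned_devs[@]}"   # 0 on this node, per the trace
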
00:03:11.719 22:54:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.719 22:54:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:11.719 22:54:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:11.719 22:54:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:11.719 22:54:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:11.997 No valid GPT data, bailing 00:03:11.997 22:54:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:11.997 22:54:04 -- scripts/common.sh@391 -- # pt= 00:03:11.997 22:54:04 -- scripts/common.sh@392 -- # return 1 00:03:11.997 22:54:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:11.997 1+0 records in 00:03:11.997 1+0 records out 00:03:11.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524709 s, 200 MB/s 00:03:11.997 22:54:04 -- spdk/autotest.sh@118 -- # sync 00:03:11.997 22:54:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.997 22:54:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.997 22:54:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:18.563 22:54:10 -- spdk/autotest.sh@124 -- # uname -s 00:03:18.822 22:54:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:18.822 22:54:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.822 22:54:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.822 22:54:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.822 22:54:10 -- common/autotest_common.sh@10 -- # set +x 00:03:18.822 ************************************ 00:03:18.822 START TEST setup.sh 00:03:18.822 ************************************ 00:03:18.822 22:54:10 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.822 * Looking for test storage... 00:03:18.822 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:18.822 22:54:10 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:18.822 22:54:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:18.822 22:54:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:18.822 22:54:11 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.822 22:54:11 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.822 22:54:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.822 ************************************ 00:03:18.822 START TEST acl 00:03:18.822 ************************************ 00:03:18.822 22:54:11 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:19.082 * Looking for test storage... 
00:03:19.082 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.082 22:54:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:19.082 22:54:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:19.082 22:54:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.082 22:54:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.359 22:54:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:24.359 22:54:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:24.359 22:54:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.359 22:54:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:24.359 22:54:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.359 22:54:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:27.650 Hugepages 00:03:27.650 node hugesize free / total 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 00:03:27.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:27.650 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.651 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:27.911 22:54:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:27.911 22:54:19 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.911 22:54:19 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.911 22:54:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:27.911 ************************************ 00:03:27.911 START TEST denied 00:03:27.911 ************************************ 00:03:27.911 22:54:19 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:27.911 22:54:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:27.911 22:54:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:27.911 22:54:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:27.911 22:54:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.911 22:54:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:32.107 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:32.107 
22:54:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.107 22:54:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.416 00:03:37.416 real 0m9.640s 00:03:37.416 user 0m2.991s 00:03:37.416 sys 0m5.834s 00:03:37.416 22:54:29 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:37.416 22:54:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:37.416 ************************************ 00:03:37.416 END TEST denied 00:03:37.416 ************************************ 00:03:37.416 22:54:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:37.416 22:54:29 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:37.416 22:54:29 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:37.416 22:54:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:37.675 ************************************ 00:03:37.675 START TEST allowed 00:03:37.675 ************************************ 00:03:37.675 22:54:29 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:37.675 22:54:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:37.675 22:54:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:37.675 22:54:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:37.675 22:54:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.675 22:54:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:42.948 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.948 22:54:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:42.948 22:54:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.948 22:54:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.948 22:54:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.948 22:54:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.228 00:03:48.228 real 0m9.775s 00:03:48.228 user 0m2.670s 00:03:48.228 sys 0m5.554s 00:03:48.228 22:54:39 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:48.229 22:54:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:48.229 ************************************ 00:03:48.229 END TEST allowed 00:03:48.229 ************************************ 00:03:48.229 00:03:48.229 real 0m28.490s 00:03:48.229 user 0m8.755s 00:03:48.229 sys 0m17.685s 00:03:48.229 22:54:39 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:48.229 22:54:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.229 ************************************ 00:03:48.229 END TEST acl 00:03:48.229 ************************************ 00:03:48.229 22:54:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.229 22:54:39 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
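Annotation: the acl suite that just finished above (denied 0m9.640s, allowed 0m9.775s) drives scripts/setup.sh purely through environment variables: PCI_BLOCKED makes "setup.sh config" skip a controller, PCI_ALLOWED restricts binding to it, and each test greps the output for the expected line. A minimal sketch replaying the same two assertions outside the harness, using only invocations and expected strings visible in this trace (assumes root, and the workspace path and BDF are taken from this run):

    #!/usr/bin/env bash
    # Sketch: replay the denied/allowed ACL checks from this log.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    bdf=0000:d8:00.0
    # denied: a blocked controller must be skipped by "setup.sh config"
    # (note the leading space in PCI_BLOCKED, exactly as in the trace)
    PCI_BLOCKED=" $bdf" "$SPDK/scripts/setup.sh" config \
        | grep "Skipping denied controller at $bdf"
    "$SPDK/scripts/setup.sh" reset
    # allowed: only the allowed controller is rebound (nvme -> vfio-pci above)
    PCI_ALLOWED=$bdf "$SPDK/scripts/setup.sh" config \
        | grep -E "$bdf .*: nvme -> .*"
    "$SPDK/scripts/setup.sh" reset
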
00:03:48.229 22:54:39 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:48.229 22:54:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.229 ************************************ 00:03:48.229 START TEST hugepages 00:03:48.229 ************************************ 00:03:48.229 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:48.229 * Looking for test storage... 00:03:48.229 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 36511820 kB' 'MemAvailable: 38901792 kB' 'Buffers: 2704 kB' 'Cached: 14880544 kB' 'SwapCached: 292 kB' 'Active: 12108852 kB' 'Inactive: 3386636 kB' 'Active(anon): 11662072 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615272 kB' 'Mapped: 165748 kB' 'Shmem: 12075884 kB' 'KReclaimable: 491884 kB' 'Slab: 1158116 kB' 'SReclaimable: 491884 kB' 'SUnreclaim: 666232 kB' 'KernelStack: 22416 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 14273440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219640 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.229 22:54:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... xtrace elided: the same setup/common.sh@31-32 sequence (IFS=': ', read -r var val _, test against \H\u\g\e\p\a\g\e\s\i\z\e, continue) repeats verbatim for every remaining /proc/meminfo field, from MemFree through the HugePages_Rsvd test ...]
00:03:48.230 22:54:39
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.230 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.231 22:54:39 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.231 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:48.231 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:48.231 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:48.231 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.231 ************************************ 00:03:48.231 START TEST default_setup 00:03:48.231 ************************************ 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.231 22:54:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:51.525 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:03:51.525 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.525 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.784 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.784 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.784 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.165 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38691348 kB' 'MemAvailable: 41081288 kB' 'Buffers: 2704 kB' 'Cached: 14880684 kB' 'SwapCached: 292 kB' 'Active: 12127576 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680796 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633336 kB' 'Mapped: 166076 kB' 'Shmem: 12076024 kB' 'KReclaimable: 491820 kB' 'Slab: 1156388 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664568 kB' 'KernelStack: 22448 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 
'Committed_AS: 14289564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219432 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.165 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.166 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.166 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... xtrace elided: the same setup/common.sh@31-32 scan now walks /proc/meminfo for AnonHugePages, one IFS/read/test/continue round per field, from Active(anon) through WritebackTmp ...]
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s
]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 
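That completes the first lookup: get_meminfo AnonHugePages scanned the whole snapshot and returned 0. Reconstructed from the @17-@33 trace entries, the helper behaves roughly as below (an approximation for readability, not the verbatim setup/common.sh source; the real function feeds mapfile from the traced printf rather than reading the file directly):

    #!/usr/bin/env bash
    # Approximate reconstruction of setup/common.sh get_meminfo() from the
    # xtrace in this log -- NOT the verbatim SPDK source.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}    # @17/@18: key to find, optional NUMA node
        local var val _             # @19
        local mem_f mem             # @20
        mem_f=/proc/meminfo         # @22
        # @23: prefer the per-node snapshot when it exists; here node is empty,
        # so /sys/devices/system/node/node/meminfo fails the -e test.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"           # @28: slurp the snapshot
        mem=("${mem[@]#Node +([0-9]) }")    # @29: strip "Node <n> " prefixes
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"  # @31
            [[ $var == "$get" ]] || continue       # @32: the long run above
            echo "${val:-0}"                       # @33
            return 0
        done
        return 1
    }

Every key the loop rejects shows up in the log as one [[ ... ]] / continue pair, which is why a single lookup produces several dozen near-identical trace entries. The same pattern repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total.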
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:53.167 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38689760 kB' 'MemAvailable: 41079700 kB' 'Buffers: 2704 kB' 'Cached: 14880688 kB' 'SwapCached: 292 kB' 'Active: 12127628 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680848 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634000 kB' 'Mapped: 166028 kB' 'Shmem: 12076028 kB' 'KReclaimable: 491820 kB' 'Slab: 1156568 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664748 kB' 'KernelStack: 22720 kB' 'PageTables: 9712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14291072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219512 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[xtrace condensed: the @31-@32 scan walks the snapshot key by key (MemTotal through HugePages_Rsvd), issuing "continue" for everything that is not HugePages_Surp]
00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
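Two lookups done, each a full pass over an identical snapshot: anon=0 and now surp=0. For orientation only, the value this pass extracts is what a single-pass awk filter over /proc/meminfo would print; the test script itself runs the pure-bash loop traced above, not awk:

    # Illustrative equivalent of the HugePages_Surp lookup (an assumption shown
    # only for clarity; the traced script does not invoke awk here).
    awk '$1 == "HugePages_Surp:" { print $2; exit }' /proc/meminfo   # -> 0 on this host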
IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690300 kB' 'MemAvailable: 41080240 kB' 'Buffers: 2704 kB' 'Cached: 14880704 kB' 'SwapCached: 292 kB' 'Active: 12126996 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680216 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633160 kB' 'Mapped: 165968 kB' 'Shmem: 12076044 kB' 'KReclaimable: 491820 kB' 'Slab: 1156520 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664700 kB' 'KernelStack: 22512 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14291092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219448 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 
22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.433 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.434 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.435 nr_hugepages=1024 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.435 resv_hugepages=0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.435 surplus_hugepages=0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.435 anon_hugepages=0 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690240 kB' 'MemAvailable: 41080180 kB' 'Buffers: 2704 kB' 'Cached: 14880704 kB' 'SwapCached: 292 kB' 'Active: 12126840 
kB' 'Inactive: 3386636 kB' 'Active(anon): 11680060 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633004 kB' 'Mapped: 165976 kB' 'Shmem: 12076044 kB' 'KReclaimable: 491820 kB' 'Slab: 1156520 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664700 kB' 'KernelStack: 22496 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14291116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219480 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.435 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.435 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [xtrace elided: the setup/common.sh@32 compare-and-continue pair repeats identically for every remaining /proc/meminfo key (Active through Unaccepted), none matching HugePages_Total] 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:53.437
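The lookup that just completed is setup/common.sh's get_meminfo helper: it splits each meminfo line on ': ', compares the key against the requested field, and echoes the value on the first match. A minimal stand-alone sketch of the same technique follows (a simplification under assumptions: get_meminfo_sketch and the sed-based "Node N " prefix stripping are illustrative, not the script's literal source):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <field> [node]; prints the field's value.
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo instead; its lines carry
    # a "Node N " prefix that must be stripped before keys compare equal.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 1024 for HugePages_Total in the dump above
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

Run against the /proc/meminfo dump above, 'get_meminfo_sketch HugePages_Total' would print 1024, the same value the trace echoes here.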
22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.437 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21254784 kB' 'MemUsed: 11384356 kB' 'SwapCached: 284 kB' 'Active: 6860736 kB' 'Inactive: 1206700 kB' 'Active(anon): 6568824 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647504 kB' 'Mapped: 112340 kB' 'AnonPages: 423108 kB' 'Shmem: 7171560 kB' 'KernelStack: 13608 kB' 'PageTables: 5676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480400 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.437 22:54:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [xtrace elided: the same compare-and-continue pair repeats for every node0 meminfo key (MemTotal through Unaccepted), none matching HugePages_Surp] 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.438 22:54:45
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:53.438 node0=1024 expecting 1024 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:53.438 00:03:53.438 real 0m5.792s 00:03:53.438 user 0m1.594s 00:03:53.438 sys 0m2.805s 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:53.438 22:54:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:53.438 ************************************ 00:03:53.438 END TEST default_setup 00:03:53.438 ************************************ 00:03:53.438 22:54:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:53.438 22:54:45 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:53.438 22:54:45 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:53.438 22:54:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.438 ************************************ 00:03:53.438 START TEST per_node_1G_alloc 00:03:53.438 ************************************ 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
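The per_node_1G_alloc test starting here requests 1048576 kB (1 GiB) of hugepages on each of nodes 0 and 1. With the 2048 kB Hugepagesize reported in the dumps above, that works out to 512 pages per node, the nr_hugepages=512 the trace settles on next. A sketch of the arithmetic (variable names are illustrative, not the script's own):

size_kb=1048576          # 1 GiB requested per listed node
hugepage_kb=2048         # Hugepagesize from /proc/meminfo
nr_per_node=$(( size_kb / hugepage_kb ))    # 512 pages of 2 MiB per node
total=$(( nr_per_node * 2 ))                # 1024 pages across HUGENODE=0,1
echo "NRHUGE=$nr_per_node HUGENODE=0,1 (expecting $total pages in total)"

This is why the verification that follows expects a system-wide HugePages_Total of 1024 even though each individual node was asked for only 512.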
00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.438 22:54:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:57.638 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.638 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38673228 kB' 'MemAvailable: 41063168 kB' 'Buffers: 2704 kB' 'Cached: 14880840 kB' 'SwapCached: 292 kB' 'Active: 12125456 kB' 'Inactive: 3386636 kB' 'Active(anon): 11678676 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631020 kB' 'Mapped: 165220 kB' 'Shmem: 12076180 kB' 'KReclaimable: 491820 kB' 'Slab: 1156448 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664628 kB' 'KernelStack: 22320 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14277952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219512 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 
7340032 kB' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.639 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: one IFS=': ' / read -r var val _ / compare / continue round per field; Inactive(anon) through HardwareCorrupted do not match AnonHugePages]
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
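The long runs of "continue" above are a single helper at work: setup/common.sh's get_meminfo snapshots a meminfo file and walks it field by field until the requested key matches, then prints the value. Below is a minimal sketch of that loop, reconstructed from the xtrace line numbers (@17-@33) rather than copied from the real setup/common.sh, so details may differ; the /proc/meminfo fallback is inferred from the [[ -e /sys/devices/system/node/node/meminfo ]] test above, where $node expanded empty:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the Node-prefix strip below

    get_meminfo() {
        local get=$1                      # @17: field to look up, e.g. HugePages_Surp
        local node=$2                     # @18: optional NUMA node; empty in this run
        local var val                     # @19
        local mem_f mem                   # @20

        mem_f=/proc/meminfo               # @22: system-wide fallback
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # @23: per-node file
        fi
        mapfile -t mem < "$mem_f"         # @28: snapshot the file in one read
        mem=("${mem[@]#Node +([0-9]) }")  # @29: drop any "Node N " prefix (extglob)

        # @31-@33: split "Key: value kB" lines; every miss is one "continue" above
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot printed above, get_meminfo AnonHugePages would yield 0 and get_meminfo HugePages_Total would yield 1024, matching the values the script records below.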
00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38674364 kB' 'MemAvailable: 41064304 kB' 'Buffers: 2704 kB' 'Cached: 14880844 kB' 'SwapCached: 292 kB' 'Active: 12125128 kB' 'Inactive: 3386636 kB' 'Active(anon): 11678348 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631200 kB' 'Mapped: 165080 kB' 'Shmem: 12076184 kB' 'KReclaimable: 491820 kB' 'Slab: 1156416 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664596 kB' 'KernelStack: 22288 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.640 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 22:54:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: Buffers through HugePages_Rsvd do not match HugePages_Surp, one read/compare/continue round per field]
00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:57.642 22:54:49
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38674672 kB' 'MemAvailable: 41064612 kB' 'Buffers: 2704 kB' 'Cached: 14880864 kB' 'SwapCached: 292 kB' 'Active: 12125556 kB' 'Inactive: 3386636 kB' 'Active(anon): 11678776 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631720 kB' 'Mapped: 165088 kB' 'Shmem: 12076204 kB' 'KReclaimable: 491820 kB' 'Slab: 1156416 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664596 kB' 'KernelStack: 22288 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 22:54:49 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31-32 -- # [trace condensed: MemAvailable through HugePages_Free do not match HugePages_Rsvd, one read/compare/continue round per field]
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:57.908 nr_hugepages=1024
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.908 resv_hugepages=0
00:03:57.908 22:54:49
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.908 surplus_hugepages=0 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.908 anon_hugepages=0 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38681168 kB' 'MemAvailable: 41071108 kB' 'Buffers: 2704 kB' 'Cached: 14880884 kB' 'SwapCached: 292 kB' 'Active: 12126256 kB' 'Inactive: 3386636 kB' 'Active(anon): 11679476 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632368 kB' 'Mapped: 165088 kB' 'Shmem: 12076224 kB' 'KReclaimable: 491820 kB' 'Slab: 1156416 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664596 kB' 'KernelStack: 22304 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.908 22:54:49 
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.908 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38681168 kB' 'MemAvailable: 41071108 kB' 'Buffers: 2704 kB' 'Cached: 14880884 kB' 'SwapCached: 292 kB' 'Active: 12126256 kB' 'Inactive: 3386636 kB' 'Active(anon): 11679476 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632368 kB' 'Mapped: 165088 kB' 'Shmem: 12076224 kB' 'KReclaimable: 491820 kB' 'Slab: 1156416 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664596 kB' 'KernelStack: 22304 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[... xtrace condensed: the read loop stepped through every field from MemTotal to HugePages_Free (continue / IFS=': ' / read -r var val _ per field) without matching HugePages_Total ...]
00:03:57.909 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.909 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:57.909 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
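The extraction that just returned 1024 is the same pattern every get_meminfo call in this log follows: pick /proc/meminfo (or a node's sysfs copy), strip the per-node "Node N " prefix, then read key/value pairs until the requested field matches. A minimal standalone re-creation of that pattern, simplified from the traced script (the mapfile/extglob details are elided, so treat this as a sketch rather than the upstream function):

    # Sketch of the get_meminfo pattern traced above: return the value of one
    # meminfo field, system-wide or for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
        # Per-node statistics live in sysfs and prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }     # drop the "Node N " prefix if present
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Total   -> 1024 on this box
    #          get_meminfo_sketch HugePages_Surp 0  -> 0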
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.910 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22294608 kB' 'MemUsed: 10344532 kB' 'SwapCached: 284 kB' 'Active: 6861720 kB' 'Inactive: 1206700 kB' 'Active(anon): 6569808 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647620 kB' 'Mapped: 111480 kB' 'AnonPages: 423976 kB' 'Shmem: 7171676 kB' 'KernelStack: 13640 kB' 'PageTables: 5760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480612 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: every node0 field from MemTotal to HugePages_Free failed the HugePages_Surp match and the read loop continued ...]
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
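Node 0 is done; before the node 1 read runs, note how the nodes themselves were found: get_nodes globs /sys/devices/system/node/node+([0-9]) with extglob and keys an array by the trailing node number. The same discovery-plus-tally idea as a self-contained snippet (the tally loop and output format are illustrative additions, not the harness code):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way the traced get_nodes does, then tally each
    # node's HugePages_Total straight from its sysfs meminfo file.
    shopt -s extglob nullglob
    declare -A node_pages
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}     # "/sys/.../node0" -> "0"
        node_pages[$id]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    for id in "${!node_pages[@]}"; do
        echo "node$id=${node_pages[$id]}"    # e.g. node0=512, node1=512 on this run
    done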
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.911 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 16387980 kB' 'MemUsed: 11268076 kB' 'SwapCached: 8 kB' 'Active: 5263976 kB' 'Inactive: 2179936 kB' 'Active(anon): 5109108 kB' 'Inactive(anon): 3100 kB' 'Active(file): 154868 kB' 'Inactive(file): 2176836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7236260 kB' 'Mapped: 53608 kB' 'AnonPages: 207768 kB' 'Shmem: 4904548 kB' 'KernelStack: 8728 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320144 kB' 'Slab: 675804 kB' 'SReclaimable: 320144 kB' 'SUnreclaim: 355660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: every node1 field from MemTotal to HugePages_Free failed the HugePages_Surp match and the read loop continued ...]
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:57.913 node0=512 expecting 512
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:57.913 node1=512 expecting 512
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:57.913 real 0m4.350s
00:03:57.913 user 0m1.575s
00:03:57.913 sys 0m2.850s
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:57.913 22:54:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:57.913 ************************************
00:03:57.913 END TEST per_node_1G_alloc
00:03:57.913 ************************************
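The test that just ended asserted node0=512 and node1=512 against an expected 512 each. The same consistency can be checked outside the harness by summing the kernel's per-node hugepage counters and comparing against the global pool; paths are the standard sysfs/procfs ones, and the 2048kB size matches the Hugepagesize reported earlier in this log:

    # Cross-check: per-node 2 MiB hugepage counts should sum to the global total.
    shopt -s nullglob
    sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        (( sum += $(<"$f") ))
    done
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    if (( sum == total )); then
        echo "per-node split consistent: $sum == $total"    # 512 + 512 == 1024 here
    else
        echo "mismatch: nodes=$sum total=$total" >&2
    fi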
00:03:57.913 22:54:50 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:57.913 22:54:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.913 ************************************
00:03:57.913 START TEST even_2G_alloc
00:03:57.913 ************************************
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.913 22:54:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
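The get_test_nr_hugepages trace above turns the 2097152 kB (2 GiB) request into 1024 default-sized pages and deals them out evenly, last node first, as 512 per node. A sketch of that arithmetic; the kB unit and the node count are read off the trace (size=2097152, Hugepagesize: 2048 kB, _no_nodes=2), not guarantees of the helper's contract:

    #!/usr/bin/env bash
    # Illustrative re-derivation of the numbers in the trace.
    size=2097152              # kB requested (2 GiB)
    default_hugepages=2048    # kB, per 'Hugepagesize: 2048 kB' in meminfo
    _no_nodes=2

    nr_hugepages=$(( size / default_hugepages ))    # 1024
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( nr_hugepages / 2 ))   # 512 each
        (( _no_nodes-- ))
    done
    echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"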
00:04:02.255 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:02.255 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
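Each "Already using the vfio-pci driver" line above means setup.sh found the device's current kernel binding already equal to the target userspace driver, so no unbind/rebind is needed. A hedged sketch of such a check against standard PCI sysfs (the device addresses are sampled from the output above; this is not SPDK's actual setup.sh logic):

    #!/usr/bin/env bash
    # Report the current driver binding for a few PCI functions (illustrative).
    want=vfio-pci
    for dev in 0000:00:04.0 0000:d8:00.0; do
        link=/sys/bus/pci/devices/$dev/driver
        if [[ -e $link ]]; then
            cur=$(basename "$(readlink -f "$link")")
            if [[ $cur == "$want" ]]; then
                echo "$dev: Already using the $want driver"
            else
                echo "$dev: bound to $cur (would rebind to $want)"
            fi
        else
            echo "$dev: not bound to any driver"
        fi
    done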
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.255 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690336 kB' 'MemAvailable: 41080276 kB' 'Buffers: 2704 kB' 'Cached: 14881036 kB' 'SwapCached: 292 kB' 'Active: 12126844 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680064 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632656 kB' 'Mapped: 165212 kB' 'Shmem: 12076376 kB' 'KReclaimable: 491820 kB' 'Slab: 1156152 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664332 kB' 'KernelStack: 22336 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219592 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:02.256 [... setup/common.sh@32 xtrace elided: every /proc/meminfo key ahead of the target (MemTotal through HardwareCorrupted) is compared against AnonHugePages and skipped via 'continue' ...]
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
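The wall of '[[ <key> == ...target... ]] / continue' pairs condensed above is a single helper, setup/common.sh's get_meminfo, scanning /proc/meminfo one field at a time under xtrace. A condensed standalone sketch of the same parsing technique, simplified from the trace (extglob is needed for the 'Node <n>' prefix strip that the per-node sysfs files require):

    #!/usr/bin/env bash
    shopt -s extglob
    # Slurp meminfo, strip any 'Node <n> ' prefix, split on ': ', and
    # return the value of the first matching key -- the idiom traced above.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        local line
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Free      # prints 1024 on the box traced above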
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.257 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690240 kB' 'MemAvailable: 41080180 kB' 'Buffers: 2704 kB' 'Cached: 14881040 kB' 'SwapCached: 292 kB' 'Active: 12126332 kB' 'Inactive: 3386636 kB' 'Active(anon): 11679552 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632144 kB' 'Mapped: 165092 kB' 'Shmem: 12076380 kB' 'KReclaimable: 491820 kB' 'Slab: 1156128 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664308 kB' 'KernelStack: 22304 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:02.258 [... setup/common.sh@32 xtrace elided: every /proc/meminfo key ahead of the target (MemTotal through HugePages_Rsvd) is compared against HugePages_Surp and skipped via 'continue' ...]
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
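verify_nr_hugepages needs three counters (anon, surp, and the resv lookup the trace resumes with below), and it re-scans /proc/meminfo from the top for each one. Where trace volume or repeated reads matter, a single-pass variant that captures every field at once is a natural alternative; this is a sketch, not the shipped helper:

    #!/usr/bin/env bash
    # One pass over /proc/meminfo instead of one scan per key (illustrative).
    declare -A meminfo
    while IFS=': ' read -r var val _; do
        meminfo[$var]=$val
    done < /proc/meminfo

    anon=${meminfo[AnonHugePages]}
    surp=${meminfo[HugePages_Surp]}
    resv=${meminfo[HugePages_Rsvd]}
    echo "anon=$anon surp=$surp resv=$resv"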
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.259 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690240 kB' 'MemAvailable: 41080180 kB' 'Buffers: 2704 kB' 'Cached: 14881040 kB' 'SwapCached: 292 kB' 'Active: 12126384 kB' 'Inactive: 3386636 kB' 'Active(anon): 11679604 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632172 kB' 'Mapped: 165092 kB' 'Shmem: 12076380 kB' 'KReclaimable: 491820 kB' 'Slab: 1156128 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664308 kB' 'KernelStack: 22320 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:02.260 [... setup/common.sh@32 xtrace elided: the HugePages_Rsvd scan is still stepping through the earlier /proc/meminfo keys (MemTotal through Percpu) when this excerpt ends ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.260 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.261 nr_hugepages=1024 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.261 resv_hugepages=0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.261 surplus_hugepages=0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.261 anon_hugepages=0 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
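The trace above is the tail of a get_meminfo HugePages_Rsvd call followed by the start of get_meminfo HugePages_Total: common.sh loads the meminfo file with mapfile, strips any per-node prefix, then walks the lines with IFS=': ' read -r var val _, skipping every key until the requested one matches and echoing its value. A minimal standalone sketch of that pattern, reconstructed from the trace alone (the helper name is illustrative; the real test/setup/common.sh differs in detail):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# get_meminfo_sketch KEY [NODE] -- print KEY's value from /proc/meminfo, or
# from /sys/devices/system/node/nodeNODE/meminfo when a node is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val rest
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <N> " prefix; strip it so every line
    # splits uniformly as "Key: value [kB]".
    mem=("${mem[@]#Node +([0-9]) }")
    # Same scan the trace shows: split on ": ", skip non-matching keys,
    # print and stop at the first match.
    while IFS=': ' read -r var val rest; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: get_meminfo_sketch HugePages_Rsvd       # -> 0 on this box
#          get_meminfo_sketch HugePages_Surp 0     # -> node0 surplus pages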
00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38690696 kB' 'MemAvailable: 41080636 kB' 'Buffers: 2704 kB' 'Cached: 14881100 kB' 'SwapCached: 292 kB' 'Active: 12126044 kB' 'Inactive: 3386636 kB' 'Active(anon): 11679264 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631740 kB' 'Mapped: 165092 kB' 'Shmem: 12076440 kB' 'KReclaimable: 491820 kB' 'Slab: 1156128 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664308 kB' 'KernelStack: 22288 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14279160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219560 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:02.261 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: each key from MemTotal through Unaccepted is tested against HugePages_Total and skipped via continue]
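The dump above is what the HugePages_Total scan walks, and it also supplies every number in the consistency check at hugepages.sh@107 and @110: the kernel-reported total must equal the requested page count plus surplus and reserved pages, and 1024 pages of 2048 kB each is exactly the 'Hugetlb: 2097152 kB' line, i.e. the 2G of this even_2G_alloc case. A hedged re-creation of that arithmetic with this run's values (variable names are illustrative):

nr_hugepages=1024   # pages the test requested
resv=0              # HugePages_Rsvd from the dump above
surp=0              # HugePages_Surp from the dump above
total=1024          # HugePages_Total from the dump above
# 1024 pages * 2048 kB/page = 2097152 kB = 2 GiB, matching 'Hugetlb: 2097152 kB'.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2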
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22290424 kB' 'MemUsed: 10348716 kB' 'SwapCached: 284 kB' 'Active: 6862316 kB' 'Inactive: 1206700 kB' 'Active(anon): 6570404 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647748 kB' 'Mapped: 111480 kB' 'AnonPages: 424364 kB' 'Shmem: 7171804 kB' 'KernelStack: 13592 kB' 'PageTables: 5668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480320 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:02.263 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: node0 keys MemTotal through HugePages_Free are tested against HugePages_Surp and skipped via continue]
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.264 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 16399840 kB' 'MemUsed: 11256216 kB' 'SwapCached: 8 kB' 'Active: 5264108 kB' 'Inactive: 2179936 kB' 'Active(anon): 5109240 kB' 'Inactive(anon): 3100 kB' 'Active(file): 154868 kB' 'Inactive(file): 2176836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7236372 kB' 'Mapped: 53612 kB' 'AnonPages: 207780 kB' 'Shmem: 4904660 kB' 'KernelStack: 8712 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320144 kB' 'Slab: 675808 kB' 'SReclaimable: 320144 kB' 'SUnreclaim: 355664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: node1 keys MemTotal through KReclaimable are tested against HugePages_Surp; the excerpt ends mid-scan]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.265 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.266 node0=512 expecting 512 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:02.266 node1=512 expecting 512 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.266 00:04:02.266 real 0m4.392s 00:04:02.266 user 0m1.605s 00:04:02.266 sys 0m2.873s 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:02.266 22:54:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.266 ************************************ 00:04:02.266 END TEST even_2G_alloc 00:04:02.266 ************************************ 00:04:02.266 22:54:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:02.266 22:54:54 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:02.266 22:54:54 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:02.266 22:54:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.525 ************************************ 00:04:02.525 START TEST odd_alloc 00:04:02.525 
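The scan elided above is the get_meminfo helper from setup/common.sh: it snapshots the relevant meminfo file into an array, strips any per-node "Node N" prefix, then reads each entry with IFS=': ' and continues until the field name matches the requested key, echoing the value (0 for HugePages_Surp here). A minimal sketch of that pattern, assuming bash 4+; the function name and the sed-based prefix strip are illustrative, not the SPDK helper verbatim:

  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val
      local mem_f=/proc/meminfo
      # a per-node query reads the node's own snapshot under /sys instead
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # per-node files prefix every line with "Node <n> "; drop that, then
      # split each line on ': ' and skip fields until the name matches
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the repeated "continue" trace lines
          echo "$val"
          return 0
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      return 1
  }

  # get_meminfo_sketch HugePages_Surp 1   -> prints 0 for the node snapshot above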
00:04:02.525 ************************************
00:04:02.525 START TEST odd_alloc
00:04:02.525 ************************************
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:02.525 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
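HUGEMEM=2049 MB works out to the 2098176 kB passed to get_test_nr_hugepages (2049 * 1024), and at a 2048 kB hugepage size that rounds up to the odd total nr_hugepages=1025. The per-node loop traced above then splits that total by repeated integer division, so node1 gets 1025/2 = 512 and node0 the 513 that remain. A sketch of just that split, under an illustrative helper name (the ": 513" / ": 1" trace lines are the two no-op arithmetic updates):

  split_hugepages_sketch() {
      local remaining=$1 nodes=$2
      local -a per_node=()
      while ((nodes > 0)); do
          per_node[nodes - 1]=$((remaining / nodes))   # nodes_test[_no_nodes - 1]=512, then 513
          : $((remaining -= per_node[nodes - 1]))      # traces as ": 513", then ": 0"
          : $((nodes -= 1))                            # traces as ": 1", then ": 0"
      done
      echo "node0=${per_node[0]} node1=${per_node[1]}"
  }

  split_hugepages_sketch 1025 2   # -> node0=513 node1=512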
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.526 22:54:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:06.725 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.725 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.725 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.726 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.726 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.726 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.726 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38715100 kB' 'MemAvailable: 41105040 kB' 'Buffers: 2704 kB' 'Cached: 14881212 kB' 'SwapCached: 292 kB' 'Active: 12127544 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680764 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633236 kB' 'Mapped: 165500 kB' 'Shmem: 12076552 kB' 'KReclaimable: 491820 kB' 'Slab: 1156404 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664584 kB' 'KernelStack: 22288 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14279928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219512 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[xtrace elided: field-by-field scan of the snapshot above, MemTotal through HardwareCorrupted, each continuing until AnonHugePages matches]
00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
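With transparent hugepages not set to [never], verify_nr_hugepages first collects the system-wide counters it needs, one get_meminfo call per field over a fresh /proc/meminfo snapshot: anonymous THP usage has just come back as anon=0, and the surplus and reserved hugepage counts follow. In sketch form, reusing the hypothetical get_meminfo_sketch from above (what the function does with these values afterwards lies outside this part of the log and is omitted):

  anon=$(get_meminfo_sketch AnonHugePages)    # 0 in the trace
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the trace
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # queried next below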
22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38715660 kB' 'MemAvailable: 41105600 kB' 'Buffers: 2704 kB' 'Cached: 14881216 kB' 'SwapCached: 292 kB' 'Active: 12127072 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680292 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632772 kB' 'Mapped: 165108 kB' 'Shmem: 12076556 kB' 'KReclaimable: 491820 kB' 'Slab: 1156404 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664584 kB' 'KernelStack: 22272 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14279944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.727 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.728 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: WritebackTmp through HugePages_Rsvd each fail the HugePages_Surp match and continue)
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38716072 kB' 'MemAvailable: 41106012 kB' 'Buffers: 2704 kB' 'Cached: 14881220 kB' 'SwapCached: 292 kB' 'Active: 12126796 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680016 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632532 kB' 'Mapped: 165108 kB' 'Shmem: 12076560 kB' 'KReclaimable: 491820 kB' 'Slab: 1156396 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664576 kB' 'KernelStack: 22272 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14279964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
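For readability, here is a minimal sketch, reconstructed from the xtrace above, of the get_meminfo pattern the trace keeps repeating: map the meminfo file into an array, strip any per-node prefix, then scan field by field and print the value once the requested key matches. This is a reconstruction from the trace, not the canonical setup/common.sh source, so helper details may differ.

  #!/usr/bin/env bash
  # Sketch of the field scan seen in the trace; assumptions noted inline.
  shopt -s extglob # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # With a node argument, read that node's meminfo instead (as the trace
      # does later for node0 and node1).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          # Print the first field that matches the requested key, e.g. "0"
          # for HugePages_Surp in this run.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  surp=$(get_meminfo HugePages_Surp)          # "0" in this run
  node0_total=$(get_meminfo HugePages_Total 0) # "512" in this run

Each full pass over /proc/meminfo is what produces the long runs of continue entries in the raw trace: every field before the requested key fails the [[ ... ]] test once.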
00:04:06.729 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: MemTotal through HugePages_Free each fail the HugePages_Rsvd match and continue)
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:06.731 nr_hugepages=1025
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.731 resv_hugepages=0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.731 surplus_hugepages=0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.731 anon_hugepages=0
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38716184 kB' 'MemAvailable: 41106124 kB' 'Buffers: 2704 kB' 'Cached: 14881276 kB' 'SwapCached: 292 kB' 'Active: 12126828 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680048 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632456 kB' 'Mapped: 165108 kB' 'Shmem: 12076616 kB' 'KReclaimable: 491820 kB' 'Slab: 1156396 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664576 kB' 'KernelStack: 22256 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14279984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219496 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
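The arithmetic guards in the trace, (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )), assert that the kernel ended up with exactly the requested odd page count and no surplus or reserved pages. A self-contained recheck of the same invariant follows; the awk field positions assume the standard /proc/meminfo layout, and the variable names are illustrative, not SPDK's.

  # Recheck hugepage accounting against /proc/meminfo (values from this run shown).
  nr_expected=1025
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo) # 1025 here
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 here
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 here
  if ((total == nr_expected + surp + resv)); then
      echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
  else
      echo "hugepage accounting mismatch: total=$total vs $((nr_expected + surp + resv))" >&2
  fi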
00:04:06.731 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: MemTotal through Unaccepted each fail the HugePages_Total match and continue)
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22304340 kB' 'MemUsed: 10334800 kB' 'SwapCached: 284 kB' 'Active: 6862548 kB' 'Inactive: 1206700 kB' 'Active(anon): 6570636 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647800 kB' 'Mapped: 111480 kB' 'AnonPages: 424632 kB' 'Shmem: 7171856 kB' 'KernelStack: 13560 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480568 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
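The scan now moves to per-node accounting. Per-node meminfo lives under /sys/devices/system/node/nodeN/meminfo and prefixes every line with "Node N ", which is why the trace strips that prefix before parsing. A compact, self-contained way to pull the hugepage counters per node is sketched below; the awk column offsets account for the two-word prefix, and the loop body is illustrative rather than SPDK's own code.

  # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
      surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
      echo "node$n: HugePages_Total=$total HugePages_Surp=$surp" # 512/0 and 513/0 in this run
  done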
00:04:06.733 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: node0 fields MemTotal through HugePages_Free each fail the HugePages_Surp match and continue)
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.734 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 16414348 kB' 'MemUsed: 11241708 kB' 'SwapCached: 8 kB' 'Active: 5264252 kB' 'Inactive: 2179936 kB' 'Active(anon): 5109384 kB' 'Inactive(anon): 3100 kB' 'Active(file): 154868 kB' 'Inactive(file): 2176836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7236492 kB' 'Mapped: 53628 kB' 'AnonPages: 207780 kB' 'Shmem: 4904780 kB' 'KernelStack: 8664 kB' 'PageTables: 2764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320144 kB' 'Slab: 675828 kB' 'SReclaimable: 320144 kB' 'SUnreclaim: 355684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: node1 fields are checked against HugePages_Surp the same way; the raw trace resumes below, after the sketch)
setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.735 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- 
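The loop traced above is the get_meminfo helper in setup/common.sh: it picks /proc/meminfo or, when a node is given, the per-node /sys/devices/system/node/nodeN/meminfo view, strips the 'Node N ' prefix those files carry, and walks the 'key: value' pairs until the requested key matches. A minimal sketch reconstructed from the xtrace alone; the real script's exact structure and line numbers may differ:

    shopt -s extglob  # the +([0-9]) pattern below needs extended globbing
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line, e.g. "Node 1 HugePages_Surp: 0"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Here get_meminfo HugePages_Surp 1 finds 'HugePages_Surp: 0' in the node1 snapshot and echoes 0, so neither node carries surplus pages and nodes_test keeps its 512/513 split.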
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:06.736 node0=512 expecting 513
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:06.736 node1=513 expecting 512
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:06.736
00:04:06.736 real 0m4.252s
00:04:06.736 user 0m1.553s
00:04:06.736 sys 0m2.766s
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:06.736 22:54:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:06.736 ************************************
00:04:06.736 END TEST odd_alloc
00:04:06.736 ************************************
00:04:06.736 22:54:58 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:06.736 22:54:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:06.736 22:54:58 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:06.736 22:54:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:06.736 ************************************
00:04:06.736 START TEST custom_alloc
00:04:06.736 ************************************
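The banner blocks and the real/user/sys timing above come from the run_test wrapper in common/autotest_common.sh. A loose sketch of its shape, reconstructed only from what the trace shows (the argument check at @1100 and the xtrace toggling); xtrace_restore is assumed here as the counterpart of xtrace_disable, and the real helper records extra timing bookkeeping:

    run_test() {
        # "'[' 2 -le 1 ']'" above: a test name plus a command are required
        [ "$#" -le 1 ] && return 1
        local test_name=$1
        shift

        xtrace_disable
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        xtrace_restore

        time "$@"          # emits the real/user/sys lines seen above
        local rc=$?

        xtrace_disable
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        xtrace_restore
        return $rc
    }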
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
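The trace above computes the custom allocation plan: get_test_nr_hugepages converts a size in kB into 2048 kB pages (1048576 kB -> 512, 2097152 kB -> 1024), and get_test_nr_hugepages_per_node spreads the result over the nodes, either copying an explicit nodes_hp plan or splitting evenly. Just below, the plan is joined into the HUGENODE string handed to scripts/setup.sh. A loose sketch of both steps, reconstructed from the xtrace; names mirror the trace, while the even-split arithmetic and the string join are assumptions consistent with the values shown:

    get_test_nr_hugepages() {
        local size=$1                 # requested pool size in kB
        local default_hugepages=2048  # Hugepagesize: 2048 kB per /proc/meminfo
        ((size >= default_hugepages)) || return 1
        nr_hugepages=$((size / default_hugepages))  # 1048576 kB -> 512 pages
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")       # empty in both calls traced above
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2             # two NUMA nodes on this rig
        local -g nodes_test=()
        if ((${#user_nodes[@]} > 0)); then
            :   # explicit per-node request; branch not taken here
        elif ((${#nodes_hp[@]} > 0)); then
            # A partial plan already sits in nodes_hp: copy it through
            for _no_nodes in "${!nodes_hp[@]}"; do
                nodes_test[_no_nodes]=${nodes_hp[_no_nodes]}
            done
        else
            # Even split: 512 pages over 2 nodes -> nodes_test=(256 256)
            while ((_no_nodes > 0)); do
                nodes_test[--_no_nodes]=$((_nr_hugepages / 2))
            done
        fi
        return 0
    }

    # custom_alloc then joins the plan into the HUGENODE string handed to
    # scripts/setup.sh; IFS=, makes "${HUGENODE[*]}" comma-separated
    IFS=,
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        ((_nr_hugepages += nodes_hp[node]))
    done
    HUGENODE=${HUGENODE[*]}  # -> nodes_hp[0]=512,nodes_hp[1]=1024 (1536 pages)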
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:06.736 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:06.737 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-78 -- # [xtrace collapsed: with nodes_hp populated, get_test_nr_hugepages_per_node copies the plan through: nodes_test[0]=512, nodes_test[1]=1024, return 0]
00:04:06.737 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:06.737 22:54:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:06.737 22:54:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.737 22:54:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:10.958 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:10.958 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:10.958 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 37686404 kB' 'MemAvailable: 40076344 kB' 'Buffers: 2704 kB' 'Cached: 14881388 kB' 'SwapCached: 292 kB' 'Active: 12127384 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680604 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632860 kB' 'Mapped: 165120 kB' 'Shmem: 12076728 kB' 'KReclaimable: 491820 kB' 'Slab: 1155932 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664112 kB' 'KernelStack: 22288 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14281044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219432 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:10.959 22:55:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: AnonHugePages scan; keys MemTotal through HardwareCorrupted are read and skipped via 'continue']
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
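Before counting surplus pages, verify_nr_hugepages rules out transparent-hugepage interference: the string in the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above matches the format of /sys/kernel/mm/transparent_hugepage/enabled (an assumption: the trace does not name the file it read), and because THP is not set to never, the AnonHugePages counter is sampled and comes back 0. A hypothetical reconstruction of that guard, reusing the get_meminfo sketch shown earlier:

    # path and variable names are assumptions based on the traced values
    thp_setting=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_setting != *"[never]"* ]]; then
        # THP may allocate anonymous hugepages behind the test's back,
        # so record the current count (0 kB on this run)
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi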
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.960 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 37686988 kB' 'MemAvailable: 40076928 kB' 'Buffers: 2704 kB' 'Cached: 14881392 kB' 'SwapCached: 292 kB' 'Active: 12127684 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680904 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633216 kB' 'Mapped: 165120 kB' 'Shmem: 12076732 kB' 'KReclaimable: 491820 kB' 'Slab: 1155916 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664096 kB' 'KernelStack: 22272 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14281060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219416 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: HugePages_Surp scan over the snapshot above; the trace shows keys MemTotal through NFS_Unstable being read and skipped via 'continue']
-- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 37687172 kB' 'MemAvailable: 40077112 kB' 'Buffers: 2704 kB' 'Cached: 14881392 kB' 'SwapCached: 292 kB' 'Active: 12127344 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680564 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632908 kB' 'Mapped: 165120 kB' 'Shmem: 
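The block above is one complete call of the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo into an array, then walks the key/value pairs until the requested key (here HugePages_Surp) matches, echoes the value, and returns. A minimal sketch of that helper, reconstructed from the set -x trace (the actual SPDK source may differ in detail):

shopt -s extglob

get_meminfo() { # usage: get_meminfo <key> [numa-node]
	local get=$1
	local node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		# Per-node view when a node index is given
		mem_f=/sys/devices/system/node/node$node/meminfo
	elif [[ -n $node ]]; then
		return 1 # a node was requested but its meminfo is missing
	fi

	mapfile -t mem <"$mem_f"
	# Per-node meminfo lines carry a "Node <N> " prefix - strip it (extglob)
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

hugepages.sh consumes it via command substitution - surp=$(get_meminfo HugePages_Surp) - which is why the echo 0 / return 0 pair above lands directly in surp=0.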
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.961 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.962 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 37687172 kB' 'MemAvailable: 40077112 kB' 'Buffers: 2704 kB' 'Cached: 14881392 kB' 'SwapCached: 292 kB' 'Active: 12127344 kB' 'Inactive: 3386636 kB' 'Active(anon): 11680564 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632908 kB' 'Mapped: 165120 kB' 'Shmem: 12076732 kB' 'KReclaimable: 491820 kB' 'Slab: 1155916 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664096 kB' 'KernelStack: 22272 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14281080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219416 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[... trace trimmed: identical per-key scan, this time against HugePages_Rsvd ...]
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:10.964 nr_hugepages=1536
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.964 resv_hugepages=0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.964 surplus_hugepages=0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.964 anon_hugepages=0
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
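With surp=0 and resv=0 in hand, the script reports the pool and asserts its arithmetic before trusting it; note that 1536 pages at a Hugepagesize of 2048 kB is exactly the 3145728 kB shown on the Hugetlb line. A sketch of the checks traced at hugepages.sh@102-@110, where the derivation of nr_hugepages is an assumption (the trace only shows its final value) and requested is a stand-in name:

requested=1536                            # 512 on node0 + 1024 on node1 in this run
surp=$(get_meminfo HugePages_Surp)        # 0 in this log
resv=$(get_meminfo HugePages_Rsvd)        # 0 in this log
nr_hugepages=$((requested - surp - resv)) # assumed; the trace only shows 1536

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$(get_meminfo AnonHugePages)"

((requested == nr_hugepages + surp + resv))     # hugepages.sh@107
((requested == nr_hugepages))                   # hugepages.sh@109
(($(get_meminfo HugePages_Total) == requested)) # the @110 query traced next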
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 37685536 kB' 'MemAvailable: 40075476 kB' 'Buffers: 2704 kB' 'Cached: 14881432 kB' 'SwapCached: 292 kB' 'Active: 12129056 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682276 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634608 kB' 'Mapped: 165624 kB' 'Shmem: 12076772 kB' 'KReclaimable: 491820 kB' 'Slab: 1155916 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664096 kB' 'KernelStack: 22256 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14283252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219400 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:10.964 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... trace trimmed: identical per-key scan, this time against HugePages_Total ...]
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.965 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22304572 kB' 'MemUsed: 10334568 kB' 'SwapCached: 284 kB' 'Active: 6869132 kB' 'Inactive: 1206700 kB' 'Active(anon): 6577220 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647824 kB' 'Mapped: 111632 kB' 'AnonPages: 431216 kB' 'Shmem: 7171880 kB' 'KernelStack: 13576 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480212 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: each node0 meminfo field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped by the same compare/continue cycle ...]
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
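For reference, the get_meminfo helper that produces the trace above can be reconstructed from the setup/common.sh@16-@33 xtrace lines roughly as follows. This is a sketch inferred from the trace, not the verbatim script; it assumes extglob is enabled for the +([0-9]) pattern seen at @29:

get_meminfo() { # usage: get_meminfo <field> [numa_node]
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# prefer the per-node view when the caller named a node that exists (@23-@24)
	[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix of per-node files (@29)
	# scan "field: value" pairs until the requested field matches (@31-@33)
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
}

When the node argument is empty, as in the AnonHugePages lookup further down, the @23 existence test fails and the function falls back to the system-wide /proc/meminfo.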
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.966 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 15375872 kB' 'MemUsed: 12280184 kB' 'SwapCached: 8 kB' 'Active: 5264212 kB' 'Inactive: 2179936 kB' 'Active(anon): 5109344 kB' 'Inactive(anon): 3100 kB' 'Active(file): 154868 kB' 'Inactive(file): 2176836 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7236644 kB' 'Mapped: 54144 kB' 'AnonPages: 207620 kB' 'Shmem: 4904932 kB' 'KernelStack: 8712 kB' 'PageTables: 2880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320144 kB' 'Slab: 675700 kB' 'SReclaimable: 320144 kB' 'SUnreclaim: 355556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: each node1 meminfo field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped by the same compare/continue cycle ...]
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
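Taken together, the hugepages.sh@110-@127 lines above implement the per-node bookkeeping sketched below. This is a reconstruction from the trace, with initialisation elided; in this run surp and resv are both 0, nodes_sys ends up as (512 1024), and the checks pass:

# total pool must equal the requested pages plus surplus and reserved (@110)
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
get_nodes # @112: fills nodes_sys[] with the per-node totals, here 512 and 1024
# fold per-node reserved and surplus pages into the expected counts (@115-@117)
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))
	(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
# record the distinct expected and actual counts for the final comparison (@126-@127)
for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1
	sorted_s[nodes_sys[node]]=1
done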
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:10.967 node0=512 expecting 512
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:10.967 node1=1024 expecting 1024
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:10.967 
00:04:10.967 real 0m4.306s
00:04:10.967 user 0m1.552s
00:04:10.967 sys 0m2.813s
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:10.967 22:55:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:10.967 ************************************
00:04:10.967 END TEST custom_alloc
00:04:10.967 ************************************
00:04:11.226 22:55:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:11.226 22:55:03 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:11.226 22:55:03 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:11.226 22:55:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.226 ************************************
00:04:11.226 START TEST no_shrink_alloc
00:04:11.226 ************************************
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.226 22:55:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:15.425 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:15.425 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
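The get_test_nr_hugepages trace above boils down to simple sizing arithmetic. As a sketch, assuming default_hugepages is the 2048 kB Hugepagesize reported in the meminfo dumps (variable names follow hugepages.sh):

size=2097152 # requested pool size in kB, first argument at @49
default_hugepages=2048 # Hugepagesize in kB, assumed from the meminfo dumps
(( size >= default_hugepages )) # sanity check at @55
nr_hugepages=$(( size / default_hugepages )) # 2097152 / 2048 = 1024 pages (@57)
node_ids=('0') # only node 0 was requested (@52), so ...
nodes_test[0]=$nr_hugepages # ... the whole 1024-page pool is expected on node 0 (@71)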
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.426 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38761448 kB' 'MemAvailable: 41151388 kB' 'Buffers: 2704 kB' 'Cached: 14881548 kB' 'SwapCached: 292 kB' 'Active: 12129392 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682612 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634596 kB' 'Mapped: 165192 kB' 'Shmem: 12076888 kB' 'KReclaimable: 491820 kB' 'Slab: 1155752 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 663932 kB' 'KernelStack: 22352 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14282768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219624 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[... xtrace elided: each system meminfo field from MemTotal through VmallocUsed is compared against AnonHugePages and skipped by the same compare/continue cycle ...]
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38762308 kB' 'MemAvailable: 41152248 kB' 'Buffers: 2704 kB' 'Cached: 14881552 kB' 'SwapCached: 292 kB' 'Active: 12129380 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682600 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634656 kB' 'Mapped: 165136 kB' 'Shmem: 12076892 kB' 'KReclaimable: 491820 kB' 'Slab: 1155868 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664048 kB' 'KernelStack: 22304 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14284284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219544 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 
22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.429 22:55:07 
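The helper driving all of this output is get_meminfo from setup/common.sh. Below is a minimal sketch of what the traced commands imply, reconstructed from the xtrace lines alone; the real script may differ, and the loop construct and sysfs fallback logic here are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Sketch of "get_meminfo <field> [node]" as implied by the trace.
    # Reconstructed, not the shipped setup/common.sh.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With $node empty the path degenerates to .../node/node/meminfo,
        # the -e test fails, and the global /proc/meminfo is used -- exactly
        # what the trace shows for these whole-system queries.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # The long runs of "[[ X == \Y ]] / continue" lines in the log are
        # this scan: split each "Key: value [kB]" line and stop at a match.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Each get_meminfo call in the trace also prints the full snapshot it parsed via printf '%s\n', which is why the complete meminfo contents repeat below before every scan.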
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.427 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38762308 kB' 'MemAvailable: 41152248 kB' 'Buffers: 2704 kB' 'Cached: 14881552 kB' 'SwapCached: 292 kB' 'Active: 12129380 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682600 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634656 kB' 'Mapped: 165136 kB' 'Shmem: 12076892 kB' 'KReclaimable: 491820 kB' 'Slab: 1155868 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664048 kB' 'KernelStack: 22304 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14284284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219544 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:15.428 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace collapsed: per-key scan skipping MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.429 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38764524 kB' 'MemAvailable: 41154464 kB' 'Buffers: 2704 kB' 'Cached: 14881568 kB' 'SwapCached: 292 kB' 'Active: 12129272 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682492 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634632 kB' 'Mapped: 165136 kB' 'Shmem: 12076908 kB' 'KReclaimable: 491820 kB' 'Slab: 1155868 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664048 kB' 'KernelStack: 22448 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14284436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219640 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:15.430 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace collapsed: per-key scan skipping MemTotal through HugePages_Free, none matching HugePages_Rsvd]
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:15.431 nr_hugepages=1024
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:15.431 resv_hugepages=0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:15.431 surplus_hugepages=0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:15.431 anon_hugepages=0
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.431 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38763652 kB' 'MemAvailable: 41153592 kB' 'Buffers: 2704 kB' 'Cached: 14881604 kB' 'SwapCached: 292 kB' 'Active: 12129564 kB' 'Inactive: 3386636 kB' 'Active(anon): 11682784 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634796 kB' 'Mapped: 165136 kB' 'Shmem: 12076944 kB' 'KReclaimable: 491820 kB' 'Slab: 1155868 kB' 'SReclaimable: 491820 kB' 'SUnreclaim: 664048 kB' 'KernelStack: 22432 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14284828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219608 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace collapsed: per-key scan skipping MemTotal through Slab, still searching for HugePages_Total]
22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.432 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
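The xtrace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one field at a time: with IFS=': ' each line splits into a key and a value, every non-matching key falls through to continue, and the requested field (here HugePages_Total, value 1024) is echoed back. A minimal standalone sketch of the same parsing idiom, with an illustrative function name that is not part of the SPDK scripts:

    # get_meminfo_field: echo the value of one /proc/meminfo field.
    # Mirrors the IFS=': ' / read -r / continue cycle traced above.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # field not present
    }

    get_meminfo_field HugePages_Total          # prints 1024 on this host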
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.433 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21292768 kB' 'MemUsed: 11346372 kB' 'SwapCached: 284 kB' 'Active: 6865836 kB' 'Inactive: 1206700 kB' 'Active(anon): 6573924 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647880 kB' 'Mapped: 111480 kB' 'AnonPages: 427780 kB' 'Shmem: 7171936 kB' 'KernelStack: 13880 kB' 'PageTables: 6276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480352 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: one setup/common.sh@32 "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" / @31 "IFS=': '" / "read -r var val _" cycle per non-matching node0 meminfo key above ...]
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:15.435 node0=1024 expecting 1024
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.435 22:55:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:19.634 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.634 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:19.634 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
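The get_nodes pass and the per-node get_meminfo call traced above lean on two bash idioms: an extglob pattern, node+([0-9]), to enumerate the /sys/devices/system/node/nodeN directories, and prefix stripping (${node##*node} for the node index, "${mem[@]#Node +([0-9]) }" for the "Node 0 " prefix that per-node meminfo lines carry). A rough standalone sketch of the same enumeration, illustrative rather than the SPDK code verbatim:

    shopt -s extglob                            # needed for the +([0-9]) pattern
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                       # .../node0 -> 0
        # Per-node meminfo lines read "Node 0 HugePages_Total: 1024",
        # so discard the two prefix words before the key and value.
        while IFS=': ' read -r _ _ var val _; do
            [[ $var == HugePages_Total ]] && nodes_sys[$id]=$val
        done < "$node/meminfo"
    done
    echo "node0 has ${nodes_sys[0]} hugepages"  # 1024 in this run

The INFO line above also explains why the pool stays at 1024 pages: setup.sh ran with NRHUGE=512 but CLEAR_HUGE=no, so the existing, larger allocation is kept rather than shrunk, which is the behavior this no_shrink_alloc test then re-verifies.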
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.634 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38764204 kB' 'MemAvailable: 41154128 kB' 'Buffers: 2704 kB' 'Cached: 14881708 kB' 'SwapCached: 292 kB' 'Active: 12131528 kB' 'Inactive: 3386636 kB' 'Active(anon): 11684748 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636332 kB' 'Mapped: 165244 kB' 'Shmem: 12077048 kB' 'KReclaimable: 491788 kB' 'Slab: 1155988 kB' 'SReclaimable: 491788 kB' 'SUnreclaim: 664200 kB' 'KernelStack: 22336 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14282816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219368 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[... xtrace elided: one setup/common.sh@32 "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" / @31 "IFS=': '" / "read -r var val _" cycle per non-matching /proc/meminfo key above ...]
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
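The hugepages.sh@96 test above is a transparent-hugepage gate: the kernel marks the active THP mode in brackets inside /sys/kernel/mm/transparent_hugepage/enabled, and because this host reports "always [madvise] never" rather than [never], the script goes on to sample AnonHugePages (0 kB here) so THP-backed anonymous memory can be discounted from the hugetlb accounting. A small sketch of the same check, reusing the illustrative get_meminfo_field helper sketched earlier:

    # Read the THP mode string, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then           # THP enabled in some mode
        anon=$(get_meminfo_field AnonHugePages)  # 0 on this host
    else
        anon=0                                   # THP off: nothing to discount
    fi
    echo "anon_hugepages=$anon"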
mapfile -t mem 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38766216 kB' 'MemAvailable: 41156140 kB' 'Buffers: 2704 kB' 'Cached: 14881708 kB' 'SwapCached: 292 kB' 'Active: 12130536 kB' 'Inactive: 3386636 kB' 'Active(anon): 11683756 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635792 kB' 'Mapped: 165140 kB' 'Shmem: 12077048 kB' 'KReclaimable: 491788 kB' 'Slab: 1155944 kB' 'SReclaimable: 491788 kB' 'SUnreclaim: 664156 kB' 'KernelStack: 22304 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14282832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219352 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.636 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.636 22:55:11 
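The loop traced above is the script's generic /proc/meminfo lookup: slurp the file, strip any per-node "Node N " prefix, then split each "Key: value kB" line and return the value for the requested key. A minimal sketch of that pattern follows; it is an assumed reconstruction of setup/common.sh's get_meminfo based only on the trace, not the actual SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Assumed reconstruction: get=<key to fetch>, node=<optional NUMA node>.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node figures live under /sys; in the trace node= is empty,
        # so both checks fail and plain /proc/meminfo is used.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # The read -r var val _ / compare / continue loop seen in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 for the snapshot above

Setting IFS to ': ' makes read split on both the colon and the spaces, so "MemTotal: 60295196 kB" lands as var=MemTotal, val=60295196, with the unit discarded into _.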
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.637 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
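The same counter can be spot-checked outside the helper with standard tools; these one-off lookups are not from the SPDK scripts, just equivalents for reference:

    # Surplus huge pages, the value the trace just returned (0):
    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo
    # All huge-page counters at once:
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo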
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38766904 kB' 'MemAvailable: 41156828 kB' 'Buffers: 2704 kB' 'Cached: 14881712 kB' 'SwapCached: 292 kB' 'Active: 12130236 kB' 'Inactive: 3386636 kB' 'Active(anon): 11683456 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635476 kB' 'Mapped: 165140 kB' 'Shmem: 12077052 kB' 'KReclaimable: 491788 kB' 'Slab: 1155928 kB' 'SReclaimable: 491788 kB' 'SUnreclaim: 664140 kB' 'KernelStack: 22304 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14282856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219352 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:19.638 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: each key from MemTotal through HugePages_Free is read and compared against HugePages_Rsvd, continuing on every miss]
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.639 nr_hugepages=1024
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.639 resv_hugepages=0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.639 surplus_hugepages=0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.639 anon_hugepages=0
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.639 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.640 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
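With anon, surp and resv all zero, the two arithmetic guards at hugepages.sh@107-109 reduce to requiring HugePages_Total == nr_hugepages == 1024, i.e. the no_shrink_alloc pool neither shrank nor grew. A hedged restatement of that invariant follows; verify_hugepage_pool is a hypothetical name, and the literal 1024 is assumed to be the requested page count:

    # Hypothetical restatement of the no_shrink_alloc invariant: the kernel's
    # HugePages_Total must account for requested + surplus + reserved pages,
    # and, with the adjustments zero, equal the requested count exactly.
    verify_hugepage_pool() {
        local requested=$1 surp=$2 resv=$3 total
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        (( total == requested + surp + resv )) || return 1
        (( total == requested ))
    }

    verify_hugepage_pool 1024 0 0 && echo "pool intact"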
00:04:19.640 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.640 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.640 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38765644 kB' 'MemAvailable: 41155568 kB' 'Buffers: 2704 kB' 'Cached: 14881748 kB' 'SwapCached: 292 kB' 'Active: 12130600 kB' 'Inactive: 3386636 kB' 'Active(anon): 11683820 kB' 'Inactive(anon): 1026052 kB' 'Active(file): 446780 kB' 'Inactive(file): 2360584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8277500 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635788 kB' 'Mapped: 165140 kB' 'Shmem: 12077088 kB' 'KReclaimable: 491788 kB' 'Slab: 1155928 kB' 'SReclaimable: 491788 kB' 'SUnreclaim: 664140 kB' 'KernelStack: 22304 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14282876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219368 kB' 'VmallocChunk: 0 kB' 'Percpu: 91840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
00:04:19.640 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive trace condensed: keys MemTotal through SecPageTables read and compared against HugePages_Total, continuing on every miss]
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
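For reference, the field scan traced above reduces to the helper below (a minimal sketch reconstructed from the xtrace lines; the real setup/common.sh may differ in detail). It reads /proc/meminfo, or the per-NUMA-node meminfo file when a node is given, and prints the value of the requested field:

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        shopt -s extglob
        # Per-node files prefix every line with "Node <n> ", e.g.
        # "Node 0 MemTotal: 32639140 kB", so pick the file and strip the prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # One compare per field, exactly as the trace shows; IFS=': ' splits
        # "HugePages_Total: 1024" into var=HugePages_Total val=1024.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 in the run above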
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.641 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21299372 kB' 'MemUsed: 11339768 kB' 'SwapCached: 284 kB' 'Active: 6864748 kB' 'Inactive: 1206700 kB' 'Active(anon): 6572836 kB' 'Inactive(anon): 1022952 kB' 'Active(file): 291912 kB' 'Inactive(file): 183748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7647888 kB' 'Mapped: 111480 kB' 'AnonPages: 426664 kB' 'Shmem: 7171944 kB' 'KernelStack: 13592 kB' 'PageTables: 5672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171676 kB' 'Slab: 480596 kB' 'SReclaimable: 171676 kB' 'SUnreclaim: 308920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:19.642 [setup/common.sh@31-32 field scan: node0 fields MemTotal through HugePages_Free each fail [[ $var == HugePages_Surp ]]; the repeated compare/continue/IFS/read entries are elided]
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:19.643 node0=1024 expecting 1024
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:19.643
00:04:19.643 real 0m8.321s
00:04:19.643 user 0m2.994s
00:04:19.643 sys 0m5.412s
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:19.643 22:55:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:19.643 ************************************
00:04:19.643 END TEST no_shrink_alloc
00:04:19.643 ************************************
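The per-node bookkeeping traced above (get_nodes plus the HugePages_Surp lookup) amounts to the sketch below. Names follow the trace, but the bodies are paraphrased; nodes_test is pre-filled by the test with its expected split, and hugepages-2048kB is an assumed page size used for illustration:

    shopt -s extglob
    nodes_sys=() nodes_test=()
    resv=0   # reserved pages; 0 in the run above
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
    }
    check_nodes() {
        local node surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            surp=$(get_meminfo HugePages_Surp "$node")   # helper sketched earlier
            (( nodes_test[node] += surp ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }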
"${!nodes_sys[@]}" 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.643 22:55:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.643 00:04:19.643 real 0m32.036s 00:04:19.643 user 0m11.095s 00:04:19.643 sys 0m19.969s 00:04:19.643 22:55:11 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:19.643 22:55:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.643 ************************************ 00:04:19.643 END TEST hugepages 00:04:19.643 ************************************ 00:04:19.643 22:55:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.643 ************************************ 00:04:19.643 START TEST driver 00:04:19.643 ************************************ 00:04:19.643 22:55:11 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:19.643 * Looking for test storage... 
00:04:19.643 22:55:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:19.643 22:55:11 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:19.643 ************************************
00:04:19.643 START TEST driver
00:04:19.643 ************************************
00:04:19.643 22:55:11 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:19.643 * Looking for test storage...
00:04:19.643 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:19.643 22:55:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:19.643 22:55:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:19.643 22:55:11 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:24.919 22:55:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:24.919 22:55:17 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:24.919 22:55:17 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:24.919 22:55:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:24.919 ************************************
00:04:24.919 START TEST guess_driver
00:04:24.919 ************************************
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 ))
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
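pick_driver's vfio branch, just traced, checks for usable IOMMU support and a resolvable vfio_pci module before choosing vfio-pci. A condensed sketch (the condition structure is approximated from the trace; modprobe --show-depends is the actual probe used):

    pick_vfio() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        shopt -s nullglob   # an empty iommu_groups dir must yield a zero count
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: the module resolves if modprobe can print its .ko chain
            if [[ $(modprobe --show-depends vfio_pci 2> /dev/null) == *.ko* ]]; then
                echo vfio-pci   # in the run above: 256 groups, module resolves
                return 0
            fi
        fi
        return 1
    }

    driver=$(pick_vfio)   # -> vfio-pci on this host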
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:24.919 Looking for driver=vfio-pci
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.919 22:55:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:29.117 [setup/driver.sh@57-61 loop over the setup.sh config output, 22:55:21 to 22:55:23: every device marker passes [[ -> == \-\> ]] and every bound driver passes [[ vfio-pci == vfio-pci ]]; the repeated read/compare trace entries are elided]
00:04:31.284 22:55:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:31.284 22:55:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:31.284 22:55:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:31.284 22:55:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:36.562
00:04:36.562 real 0m11.522s
00:04:36.562 user 0m3.137s
00:04:36.562 sys 0m6.026s
00:04:36.562 22:55:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:36.562 22:55:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:36.562 ************************************
00:04:36.562 END TEST guess_driver
00:04:36.562 ************************************
00:04:36.562 real 0m16.991s
00:04:36.562 user 0m4.589s
00:04:36.562 sys 0m9.170s
00:04:36.562 22:55:28 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:36.562 22:55:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:36.562 ************************************
00:04:36.562 END TEST driver
00:04:36.562 ************************************
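The elided scan is the whole of guess_driver's verification: it reads the setup.sh config output line by line and checks that every bound device points at the chosen driver. A sketch (the fail counting is an assumption, since the trace only ever shows fail staying 0, and $rootdir stands in for the SPDK checkout path):

    guess_driver_scan() {
        local fail=0 _ marker setup_driver
        while read -r _ _ _ _ marker setup_driver; do
            # bound devices print "... -> <driver>"; skip everything else
            [[ $marker == '->' ]] || continue
            [[ $setup_driver == vfio-pci ]] || (( fail++ ))
        done < <("$rootdir/scripts/setup.sh" config)
        (( fail == 0 ))
    }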
00:04:36.562 22:55:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:36.562 22:55:28 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:36.562 22:55:28 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:36.562 22:55:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:36.562 ************************************
00:04:36.562 START TEST devices
00:04:36.562 ************************************
00:04:36.562 22:55:28 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:36.822 * Looking for test storage...
00:04:36.822 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:36.822 22:55:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:36.822 22:55:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:36.822 22:55:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:36.822 22:55:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=()
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]]
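get_zoned_devs, traced above, filters out zoned NVMe namespaces before the mount tests. A sketch following the trace (the stored value is a membership marker chosen for illustration):

    declare -A zoned_devs=()
    get_zoned_devs() {
        local nvme device
        for nvme in /sys/block/nvme*; do
            device=${nvme##*/}
            [[ -e /sys/block/$device/queue/zoned ]] || continue
            # "none" means a conventional (non-zoned) device; keep those
            [[ $(< "/sys/block/$device/queue/zoned") == none ]] && continue
            zoned_devs[$device]=1
        done
    }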
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:42.096 22:55:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:42.096 No valid GPT data, bailing
00:04:42.096 22:55:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:42.096 22:55:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:42.096 22:55:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:42.096 22:55:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:42.096 ************************************
00:04:42.096 START TEST nvme_mount
00:04:42.096 ************************************
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
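The device-selection steps above gate each namespace on two things: block_in_use must say the disk is free, and the byte size derived from the sysfs sector count must reach min_disk_size (3 GiB). A sketch under those assumptions (block_in_use is approximated here with blkid only; the real check consults scripts/spdk-gpt.py first, which printed "No valid GPT data, bailing" above):

    blocks=()
    min_disk_size=3221225472
    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))   # sysfs size is in 512-byte sectors
    }
    block_in_use() {
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]   # empty pt (as in the log: pt=) means the disk is free
    }
    if ! block_in_use nvme0n1 && (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )); then
        blocks+=(nvme0n1)   # 1600321314816 >= 3221225472 in the run above
    fi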
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:42.096 22:55:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:42.356 Creating new GPT entries in memory.
00:04:42.356 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:42.356 other utilities.
00:04:42.356 22:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:42.356 22:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:42.356 22:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:42.356 22:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:42.356 22:55:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:43.737 Creating new GPT entries in memory.
00:04:43.737 The operation has completed successfully.
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4107498
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
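partition_drive and mkfs, traced above, do the following: wipe the GPT, create a single 1 GiB partition (sectors 2048 through 2099199) under flock so nothing else touches the disk mid-sgdisk, wait for udev to surface the partition node, then format and mount it. A sketch; $mnt is a stand-in for the workspace nvme_mount path, and the real script waits via scripts/sync_dev_uevents.sh rather than udevadm settle:

    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=/tmp/nvme_mount
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB
    udevadm settle                                      # wait for $part to appear
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part"
    mount "$part" "$mnt"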
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.737 22:55:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:47.030 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:47.030 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:47.290 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:47.290 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:04:47.290 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:47.290 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
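The wipefs output above is worth decoding: 53 ef at offset 0x438 is the ext4 superblock magic (0xEF53, stored little-endian at offset 0x38 inside the superblock, which itself starts 1024 bytes into the partition), 45 46 49 20 50 41 52 54 is the ASCII GPT signature "EFI PART" (primary header at LBA 1, backup header at the end of the disk), and 55 aa at 0x1fe is the protective-MBR boot signature. The same teardown can be reproduced by hand on a scratch device:

  umount /mnt/nvme_test 2>/dev/null || true  # hypothetical mount point from the sketch above
  wipefs --all /dev/nvme0n1p1                # clears the ext4 magic (53 ef at 0x438)
  wipefs --all /dev/nvme0n1                  # clears both GPT headers and the protective MBR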
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.290 22:55:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
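Each verify pass above replays the same pattern: run setup.sh config restricted to PCI_ALLOWED and scan its per-device status lines until the allowed BDF reports the expected active mount. Roughly, and reconstructed from the trace rather than taken from devices.sh itself:

  # hypothetical reconstruction of the verify loop seen in the trace
  PCI_ALLOWED=0000:d8:00.0
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$PCI_ALLOWED" ]] || continue
      # the allowed device must report the expected active mount
      [[ $status == *"Active devices: "*"nvme0n1:nvme0n1"* ]] && found=1
  done < <(PCI_ALLOWED=$PCI_ALLOWED ./scripts/setup.sh config)
  (( found == 1 ))   # the test fails unless the mount was observed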
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' ''
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:51.548 22:55:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.840 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:04:54.840 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:54.840 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:54.840 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:55.100 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:55.100
00:04:55.100 real 0m13.682s
00:04:55.100 user 0m3.812s
00:04:55.100 sys 0m7.439s
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable
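The real/user/sys triple that closes the test is simply bash's time keyword wrapped around the test body; nothing SPDK-specific is involved:

  time { sleep 0.2; }   # emits the same three-line real/user/sys summary seen above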
00:04:55.100 22:55:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:55.100 ************************************
00:04:55.100 END TEST nvme_mount
00:04:55.100 ************************************
00:04:55.100 22:55:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:55.100 22:55:47 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:55.100 22:55:47 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:55.100 22:55:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:55.100 ************************************
00:04:55.100 START TEST dm_mount
00:04:55.100 ************************************
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:55.100 22:55:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:56.478 Creating new GPT entries in memory.
00:04:56.478 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:56.478 other utilities.
00:04:56.478 22:55:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:56.478 22:55:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:56.478 22:55:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:56.478 22:55:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:56.478 22:55:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:57.414 Creating new GPT entries in memory.
00:04:57.414 The operation has completed successfully.
00:04:57.414 22:55:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:57.414 22:55:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:57.414 22:55:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:57.414 22:55:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:57.414 22:55:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:58.353 The operation has completed successfully.
00:04:58.353 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:58.353 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:58.353 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4112666
00:04:58.353 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:58.353 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size=
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
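dm_mount builds two 1 GiB partitions and stacks a device-mapper node named nvme_dm_test on top of them; the trace shows the dmsetup create call but not the table it was fed. One generic way to get the observed layout (both partitions held by dm-0) is a linear concatenation, with all offsets and lengths in 512-byte sectors:

  # illustrative only: the exact table used by devices.sh is not visible in this log
  dmsetup create nvme_dm_test <<'EOF'
  0 2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  EOF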
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.354 22:55:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.645 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.646 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.906 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:01.906 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:01.906 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:01.906 22:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
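The holder checks in the verify pass are plain sysfs: once the dm node exists, each backing partition gains a holders/ entry naming it, which is how the test can assert holder@nvme0n1p1:dm-0 without querying device-mapper at all:

  ls /sys/class/block/nvme0n1p1/holders/   # -> dm-0
  ls /sys/class/block/nvme0n1p2/holders/   # -> dm-0
  readlink -f /dev/mapper/nvme_dm_test     # -> /dev/dm-0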
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.906 22:55:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:06.101 22:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:06.101 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:06.101
00:05:06.101 real 0m10.737s
00:05:06.101 user 0m2.455s
00:05:06.101 sys 0m5.081s
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:06.101 22:55:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:06.101 ************************************
00:05:06.101 END TEST dm_mount
00:05:06.101 ************************************
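Teardown mirrors setup in reverse: unmount first, remove the dm node, then wipe the filesystem signatures from the backing partitions so the next run starts from a clean disk. A condensed sketch using the same paths as the trace:

  umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 2>/dev/null || true
  dmsetup remove --force nvme_dm_test   # tear down dm-0
  wipefs --all /dev/nvme0n1p1           # remove the ext4 signature
  wipefs --all /dev/nvme0n1p2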
00:05:06.101 22:55:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:06.101 22:55:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:06.101 22:55:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:06.102 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:06.102 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:05:06.102 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:06.102 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:06.102 22:55:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:06.102
00:05:06.102 real 0m29.549s
00:05:06.102 user 0m7.954s
00:05:06.102 sys 0m15.897s
00:05:06.102 22:55:58 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:06.102 22:55:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:06.102 ************************************
00:05:06.102 END TEST devices
00:05:06.102 ************************************
00:05:06.361
00:05:06.361 real 1m47.525s
00:05:06.361 user 0m32.565s
00:05:06.361 sys 1m3.047s
00:05:06.361 22:55:58 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:06.361 22:55:58 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:06.361 ************************************
00:05:06.361 END TEST setup.sh
00:05:06.361 ************************************
00:05:06.361 22:55:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:05:10.554 Hugepages
00:05:10.554 node hugesize free / total
00:05:10.554 node0 1048576kB 0 / 0
00:05:10.554 node0 2048kB 2048 / 2048
00:05:10.554 node1 1048576kB 0 / 0
00:05:10.554 node1 2048kB 0 / 0
00:05:10.554
00:05:10.554 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:10.554 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:10.554 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:10.554 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:10.554 22:56:02 -- spdk/autotest.sh@130 -- # uname -s
00:05:10.555 22:56:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:10.555 22:56:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:10.555 22:56:02 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:14.749 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:14.749 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:16.127 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:05:16.128 22:56:08 -- common/autotest_common.sh@1531 -- # sleep 1
00:05:17.064 22:56:09 -- common/autotest_common.sh@1532 -- # bdfs=()
00:05:17.064 22:56:09 -- common/autotest_common.sh@1532 -- # local bdfs
00:05:17.064 22:56:09 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs))
00:05:17.064 22:56:09 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs
00:05:17.064 22:56:09 -- common/autotest_common.sh@1512 -- # bdfs=()
00:05:17.064 22:56:09 -- common/autotest_common.sh@1512 -- # local bdfs
00:05:17.065 22:56:09 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:17.065 22:56:09 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:17.065 22:56:09 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr'
00:05:17.324 22:56:09 -- common/autotest_common.sh@1514 -- # (( 1 == 0 ))
00:05:17.324 22:56:09 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0
00:05:17.324 22:56:09 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:05:21.519 Waiting for block devices as requested
00:05:21.519 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:21.519 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:21.519 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:21.519 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:21.519 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:21.778 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:21.778 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:21.778 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:22.038 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:22.038 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:22.038 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:22.373 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
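The ioatdma -> vfio-pci lines are setup.sh rebinding every I/OAT channel and the NVMe controller to vfio-pci so user-space drivers can claim them. For a single device, one common sysfs mechanism looks like the sketch below; setup.sh wraps the same idea with allowlists and hugepage configuration on top:

  # generic sysfs rebind, illustrating what setup.sh automates per BDF
  bdf=0000:00:04.0                                        # example device from the table above
  echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind   # detach the kernel driver (ioatdma)
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf" > /sys/bus/pci/drivers_probe                # re-probe; vfio-pci now claims it
  echo > /sys/bus/pci/devices/$bdf/driver_override        # clear the override afterwards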
00:05:22.373 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:22.373 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:22.373 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:22.632 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:22.632 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:05:22.891 22:56:14 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}"
00:05:22.891 22:56:14 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme
00:05:22.891 22:56:14 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]]
00:05:22.891 22:56:14 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]]
00:05:22.891 22:56:14 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0
00:05:22.891 22:56:14 -- common/autotest_common.sh@1544 -- # grep oacs
00:05:22.891 22:56:14 -- common/autotest_common.sh@1544 -- # cut -d: -f2
00:05:22.891 22:56:15 -- common/autotest_common.sh@1544 -- # oacs=' 0xe'
00:05:22.891 22:56:15 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8
00:05:22.891 22:56:15 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]]
00:05:22.891 22:56:15 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0
00:05:22.891 22:56:15 -- common/autotest_common.sh@1553 -- # grep unvmcap
00:05:22.891 22:56:15 -- common/autotest_common.sh@1553 -- # cut -d: -f2
00:05:22.891 22:56:15 -- common/autotest_common.sh@1553 -- # unvmcap=' 0'
00:05:22.891 22:56:15 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]]
00:05:22.891 22:56:15 -- common/autotest_common.sh@1556 -- # continue
00:05:22.891 22:56:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:05:22.891 22:56:15 -- common/autotest_common.sh@729 -- # xtrace_disable
00:05:22.891 22:56:15 -- common/autotest_common.sh@10 -- # set +x
00:05:22.891 22:56:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:05:22.891 22:56:15 -- common/autotest_common.sh@723 -- # xtrace_disable
00:05:22.891 22:56:15 -- common/autotest_common.sh@10 -- # set +x
00:05:22.891 22:56:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:27.172 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:27.172 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:28.551 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:05:28.811 22:56:20 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:05:28.811 22:56:20 -- common/autotest_common.sh@729 -- # xtrace_disable
00:05:28.811 22:56:20 -- common/autotest_common.sh@10 -- # set +x
00:05:28.811 22:56:20 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:05:28.811 22:56:20 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs
00:05:28.811 22:56:20 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54
00:05:28.811 22:56:20 -- common/autotest_common.sh@1576 -- # bdfs=()
00:05:28.811 22:56:20 -- common/autotest_common.sh@1576 -- # local bdfs
00:05:28.811 22:56:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs
00:05:28.811 22:56:20 -- common/autotest_common.sh@1512 -- # bdfs=()
00:05:28.811 22:56:20 -- common/autotest_common.sh@1512 -- # local bdfs
00:05:28.811 22:56:20 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:28.811 22:56:20 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:28.811 22:56:20 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr'
00:05:28.811 22:56:21 -- common/autotest_common.sh@1514 -- # (( 1 == 0 ))
00:05:28.811 22:56:21 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0
00:05:28.811 22:56:21 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs)
00:05:29.071 22:56:21 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device
00:05:29.071 22:56:21 -- common/autotest_common.sh@1579 -- # device=0x0a54
00:05:29.071 22:56:21 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:29.071 22:56:21 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf)
00:05:29.071 22:56:21 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0
00:05:29.071 22:56:21 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]]
00:05:29.071 22:56:21 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=4123955
00:05:29.071 22:56:21 -- common/autotest_common.sh@1597 -- # waitforlisten 4123955
00:05:29.071 22:56:21 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.071 22:56:21 -- common/autotest_common.sh@830 -- # '[' -z 4123955 ']'
00:05:29.071 22:56:21 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.071 22:56:21 -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:29.071 22:56:21 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.071 22:56:21 -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:29.071 22:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:29.071 [2024-06-07 22:56:21.121375] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
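The oacs/unvmcap probe above is standard nvme-cli parsing: OACS is the Optional Admin Command Support field, and bit 3 (0x8) advertises namespace management, which is exactly what the computed oacs_ns_manage=8 reflects for oacs=0xe (binary 1110). Reassembled from the pipeline in the trace:

  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # ' 0xe' on this drive
  oacs_ns_manage=$(( oacs & 0x8 ))                                 # 0xe & 0x8 = 8 -> namespace mgmt supported
  unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)  # 0 -> no unallocated capacity to revert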
00:05:29.071 [2024-06-07 22:56:21.121460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123955 ]
00:05:29.071 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.071 [2024-06-07 22:56:21.238789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.071 [2024-06-07 22:56:21.327361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.008 22:56:22 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:30.008 22:56:22 -- common/autotest_common.sh@863 -- # return 0
00:05:30.008 22:56:22 -- common/autotest_common.sh@1599 -- # bdf_id=0
00:05:30.008 22:56:22 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}"
00:05:30.008 22:56:22 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:05:33.293 nvme0n1
00:05:33.293 22:56:25 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:33.293 [2024-06-07 22:56:25.356481] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:33.293 request:
00:05:33.293 {
00:05:33.293 "nvme_ctrlr_name": "nvme0",
00:05:33.293 "password": "test",
00:05:33.293 "method": "bdev_nvme_opal_revert",
00:05:33.293 "req_id": 1
00:05:33.293 }
00:05:33.293 Got JSON-RPC error response
00:05:33.293 response:
00:05:33.293 {
00:05:33.293 "code": -32602,
00:05:33.293 "message": "Invalid parameters"
00:05:33.293 }
00:05:33.293 22:56:25 -- common/autotest_common.sh@1603 -- # true
00:05:33.293 22:56:25 -- common/autotest_common.sh@1604 -- # (( ++bdf_id ))
00:05:33.293 22:56:25 -- common/autotest_common.sh@1607 -- # killprocess 4123955
00:05:33.293 22:56:25 -- common/autotest_common.sh@949 -- # '[' -z 4123955 ']'
00:05:33.293 22:56:25 -- common/autotest_common.sh@953 -- # kill -0 4123955
00:05:33.293 22:56:25 -- common/autotest_common.sh@954 -- # uname
00:05:33.293 22:56:25 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:33.293 22:56:25 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4123955
00:05:33.293 22:56:25 -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:33.293 22:56:25 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:33.293 22:56:25 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4123955'
00:05:33.293 killing process with pid 4123955
00:05:33.293 22:56:25 -- common/autotest_common.sh@968 -- # kill 4123955
00:05:33.293 22:56:25 -- common/autotest_common.sh@973 -- # wait 4123955
00:05:35.827 22:56:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:35.827 22:56:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:35.827 22:56:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:35.827 22:56:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:35.827 22:56:27 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:35.827 22:56:27 -- common/autotest_common.sh@723 -- # xtrace_disable
00:05:35.827 22:56:27 -- common/autotest_common.sh@10 -- # set +x
00:05:35.827 22:56:27 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:35.827 22:56:27 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh
00:05:35.827 22:56:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
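The request/response pair above is ordinary JSON-RPC over the target's Unix socket; rpc.py is only a thin client. The failing call can be reissued by hand against a running spdk_tgt, and the -32602 here simply means the drive does not support Opal, which the test swallows (the literal true in the trace):

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock bdev_nvme_opal_revert -b nvme0 -p test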
00:05:35.827 22:56:27 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:35.827 22:56:27 -- common/autotest_common.sh@10 -- # set +x
00:05:35.827 ************************************
00:05:35.827 START TEST env
00:05:35.827 ************************************
00:05:35.827 22:56:27 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh
00:05:35.827 * Looking for test storage...
00:05:35.827 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env
00:05:35.827 22:56:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut
00:05:35.827 22:56:27 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:35.827 22:56:27 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:35.827 22:56:27 env -- common/autotest_common.sh@10 -- # set +x
00:05:35.827 ************************************
00:05:35.827 START TEST env_memory
00:05:35.827 ************************************
00:05:35.827 22:56:27 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut
00:05:35.827
00:05:35.827
00:05:35.827 CUnit - A unit testing framework for C - Version 2.1-3
00:05:35.827 http://cunit.sourceforge.net/
00:05:35.827
00:05:35.827
00:05:35.827 Suite: memory
00:05:35.827 Test: alloc and free memory map ...[2024-06-07 22:56:27.878244] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:35.827 passed
00:05:35.827 Test: mem map translation ...[2024-06-07 22:56:27.896086] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:35.827 [2024-06-07 22:56:27.896107] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:35.827 [2024-06-07 22:56:27.896150] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:35.827 [2024-06-07 22:56:27.896162] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:35.827 passed
00:05:35.827 Test: mem map registration ...[2024-06-07 22:56:27.927058] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:35.827 [2024-06-07 22:56:27.927084] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:35.827 passed
00:05:35.827 Test: mem map adjacent registrations ...passed
00:05:35.827
00:05:35.827 Run Summary: Type Total Ran Passed Failed Inactive
00:05:35.827 suites 1 1 n/a 0 0
00:05:35.827 tests 4 4 4 0 0
00:05:35.827 asserts 152 152 152 0 n/a
00:05:35.827
00:05:35.827 Elapsed time = 0.114 seconds
00:05:35.827
00:05:35.827 real 0m0.128s
00:05:35.827 user 0m0.115s
00:05:35.827 sys 0m0.012s
00:05:35.828 22:56:27 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable
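Every START/END banner and timing block in this log comes from the run_test wrapper in autotest_common.sh. An illustrative stand-in (not SPDK's actual helper) that produces the same shape of output:

  run_test() {   # illustrative reimplementation, for reading the log only
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"                  # bash's time keyword yields the real/user/sys lines
      local rc=$?
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
      return $rc
  }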
-- # set +x 00:05:35.828 ************************************ 00:05:35.828 END TEST env_memory 00:05:35.828 ************************************ 00:05:35.828 22:56:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.828 22:56:28 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:35.828 22:56:28 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:35.828 22:56:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.828 ************************************ 00:05:35.828 START TEST env_vtophys 00:05:35.828 ************************************ 00:05:35.828 22:56:28 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.828 EAL: lib.eal log level changed from notice to debug 00:05:35.828 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.828 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.828 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.828 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.828 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.828 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.828 EAL: Detected lcore 6 as core 6 on socket 0 00:05:35.828 EAL: Detected lcore 7 as core 8 on socket 0 00:05:35.828 EAL: Detected lcore 8 as core 9 on socket 0 00:05:35.828 EAL: Detected lcore 9 as core 10 on socket 0 00:05:35.828 EAL: Detected lcore 10 as core 11 on socket 0 00:05:35.828 EAL: Detected lcore 11 as core 12 on socket 0 00:05:35.828 EAL: Detected lcore 12 as core 13 on socket 0 00:05:35.828 EAL: Detected lcore 13 as core 14 on socket 0 00:05:35.828 EAL: Detected lcore 14 as core 16 on socket 0 00:05:35.828 EAL: Detected lcore 15 as core 17 on socket 0 00:05:35.828 EAL: Detected lcore 16 as core 18 on socket 0 00:05:35.828 EAL: Detected lcore 17 as core 19 on socket 0 00:05:35.828 EAL: Detected lcore 18 as core 20 on socket 0 00:05:35.828 EAL: Detected lcore 19 as core 21 on socket 0 00:05:35.828 EAL: Detected lcore 20 as core 22 on socket 0 00:05:35.828 EAL: Detected lcore 21 as core 24 on socket 0 00:05:35.828 EAL: Detected lcore 22 as core 25 on socket 0 00:05:35.828 EAL: Detected lcore 23 as core 26 on socket 0 00:05:35.828 EAL: Detected lcore 24 as core 27 on socket 0 00:05:35.828 EAL: Detected lcore 25 as core 28 on socket 0 00:05:35.828 EAL: Detected lcore 26 as core 29 on socket 0 00:05:35.828 EAL: Detected lcore 27 as core 30 on socket 0 00:05:35.828 EAL: Detected lcore 28 as core 0 on socket 1 00:05:35.828 EAL: Detected lcore 29 as core 1 on socket 1 00:05:35.828 EAL: Detected lcore 30 as core 2 on socket 1 00:05:35.828 EAL: Detected lcore 31 as core 3 on socket 1 00:05:35.828 EAL: Detected lcore 32 as core 4 on socket 1 00:05:35.828 EAL: Detected lcore 33 as core 5 on socket 1 00:05:35.828 EAL: Detected lcore 34 as core 6 on socket 1 00:05:35.828 EAL: Detected lcore 35 as core 8 on socket 1 00:05:35.828 EAL: Detected lcore 36 as core 9 on socket 1 00:05:35.828 EAL: Detected lcore 37 as core 10 on socket 1 00:05:35.828 EAL: Detected lcore 38 as core 11 on socket 1 00:05:35.828 EAL: Detected lcore 39 as core 12 on socket 1 00:05:35.828 EAL: Detected lcore 40 as core 13 on socket 1 00:05:35.828 EAL: Detected lcore 41 as core 14 on socket 1 00:05:35.828 EAL: Detected lcore 42 as core 16 on socket 1 00:05:35.828 EAL: Detected lcore 43 as core 17 on socket 1 00:05:35.828 EAL: Detected lcore 44 as core 18 on socket 1 00:05:35.828 EAL: Detected lcore 45 as core 19 on 
socket 1 00:05:35.828 EAL: Detected lcore 46 as core 20 on socket 1 00:05:35.828 EAL: Detected lcore 47 as core 21 on socket 1 00:05:35.828 EAL: Detected lcore 48 as core 22 on socket 1 00:05:35.828 EAL: Detected lcore 49 as core 24 on socket 1 00:05:35.828 EAL: Detected lcore 50 as core 25 on socket 1 00:05:35.828 EAL: Detected lcore 51 as core 26 on socket 1 00:05:35.828 EAL: Detected lcore 52 as core 27 on socket 1 00:05:35.828 EAL: Detected lcore 53 as core 28 on socket 1 00:05:35.828 EAL: Detected lcore 54 as core 29 on socket 1 00:05:35.828 EAL: Detected lcore 55 as core 30 on socket 1 00:05:35.828 EAL: Detected lcore 56 as core 0 on socket 0 00:05:35.828 EAL: Detected lcore 57 as core 1 on socket 0 00:05:35.828 EAL: Detected lcore 58 as core 2 on socket 0 00:05:35.828 EAL: Detected lcore 59 as core 3 on socket 0 00:05:35.828 EAL: Detected lcore 60 as core 4 on socket 0 00:05:35.828 EAL: Detected lcore 61 as core 5 on socket 0 00:05:35.828 EAL: Detected lcore 62 as core 6 on socket 0 00:05:35.828 EAL: Detected lcore 63 as core 8 on socket 0 00:05:35.828 EAL: Detected lcore 64 as core 9 on socket 0 00:05:35.828 EAL: Detected lcore 65 as core 10 on socket 0 00:05:35.828 EAL: Detected lcore 66 as core 11 on socket 0 00:05:35.828 EAL: Detected lcore 67 as core 12 on socket 0 00:05:35.828 EAL: Detected lcore 68 as core 13 on socket 0 00:05:35.828 EAL: Detected lcore 69 as core 14 on socket 0 00:05:35.828 EAL: Detected lcore 70 as core 16 on socket 0 00:05:35.828 EAL: Detected lcore 71 as core 17 on socket 0 00:05:35.828 EAL: Detected lcore 72 as core 18 on socket 0 00:05:35.828 EAL: Detected lcore 73 as core 19 on socket 0 00:05:35.828 EAL: Detected lcore 74 as core 20 on socket 0 00:05:35.828 EAL: Detected lcore 75 as core 21 on socket 0 00:05:35.828 EAL: Detected lcore 76 as core 22 on socket 0 00:05:35.828 EAL: Detected lcore 77 as core 24 on socket 0 00:05:35.828 EAL: Detected lcore 78 as core 25 on socket 0 00:05:35.828 EAL: Detected lcore 79 as core 26 on socket 0 00:05:35.828 EAL: Detected lcore 80 as core 27 on socket 0 00:05:35.828 EAL: Detected lcore 81 as core 28 on socket 0 00:05:35.828 EAL: Detected lcore 82 as core 29 on socket 0 00:05:35.828 EAL: Detected lcore 83 as core 30 on socket 0 00:05:35.828 EAL: Detected lcore 84 as core 0 on socket 1 00:05:35.828 EAL: Detected lcore 85 as core 1 on socket 1 00:05:35.828 EAL: Detected lcore 86 as core 2 on socket 1 00:05:35.828 EAL: Detected lcore 87 as core 3 on socket 1 00:05:35.828 EAL: Detected lcore 88 as core 4 on socket 1 00:05:35.828 EAL: Detected lcore 89 as core 5 on socket 1 00:05:35.828 EAL: Detected lcore 90 as core 6 on socket 1 00:05:35.828 EAL: Detected lcore 91 as core 8 on socket 1 00:05:35.828 EAL: Detected lcore 92 as core 9 on socket 1 00:05:35.828 EAL: Detected lcore 93 as core 10 on socket 1 00:05:35.828 EAL: Detected lcore 94 as core 11 on socket 1 00:05:35.828 EAL: Detected lcore 95 as core 12 on socket 1 00:05:35.828 EAL: Detected lcore 96 as core 13 on socket 1 00:05:35.828 EAL: Detected lcore 97 as core 14 on socket 1 00:05:35.828 EAL: Detected lcore 98 as core 16 on socket 1 00:05:35.828 EAL: Detected lcore 99 as core 17 on socket 1 00:05:35.828 EAL: Detected lcore 100 as core 18 on socket 1 00:05:35.828 EAL: Detected lcore 101 as core 19 on socket 1 00:05:35.828 EAL: Detected lcore 102 as core 20 on socket 1 00:05:35.828 EAL: Detected lcore 103 as core 21 on socket 1 00:05:35.828 EAL: Detected lcore 104 as core 22 on socket 1 00:05:35.828 EAL: Detected lcore 105 as core 24 on socket 1 00:05:35.828 EAL: 
Detected lcore 106 as core 25 on socket 1 00:05:35.828 EAL: Detected lcore 107 as core 26 on socket 1 00:05:35.828 EAL: Detected lcore 108 as core 27 on socket 1 00:05:35.828 EAL: Detected lcore 109 as core 28 on socket 1 00:05:35.828 EAL: Detected lcore 110 as core 29 on socket 1 00:05:35.828 EAL: Detected lcore 111 as core 30 on socket 1 00:05:35.828 EAL: Maximum logical cores by configuration: 128 00:05:35.828 EAL: Detected CPU lcores: 112 00:05:35.828 EAL: Detected NUMA nodes: 2 00:05:35.828 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:35.828 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:35.828 EAL: Checking presence of .so 'librte_eal.so' 00:05:35.828 EAL: Detected static linkage of DPDK 00:05:35.828 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.087 EAL: Bus pci wants IOVA as 'DC' 00:05:36.087 EAL: Buses did not request a specific IOVA mode. 00:05:36.087 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:36.087 EAL: Selected IOVA mode 'VA' 00:05:36.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.087 EAL: Probing VFIO support... 00:05:36.087 EAL: IOMMU type 1 (Type 1) is supported 00:05:36.087 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:36.087 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:36.087 EAL: VFIO support initialized 00:05:36.087 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.087 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.087 EAL: Setting up physically contiguous memory... 00:05:36.087 EAL: Setting maximum number of open files to 524288 00:05:36.087 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.087 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:36.087 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.087 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:36.087 
EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:36.087 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.087 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:36.087 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.087 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.087 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:36.087 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:36.087 EAL: Hugepages will be freed exactly as allocated. 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: TSC frequency is ~2500000 KHz 00:05:36.087 EAL: Main lcore 0 is ready (tid=7f0df2536a00;cpuset=[0]) 00:05:36.087 EAL: Trying to obtain current memory policy. 00:05:36.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.087 EAL: Restoring previous memory policy: 0 00:05:36.087 EAL: request: mp_malloc_sync 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.087 00:05:36.087 00:05:36.087 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.087 http://cunit.sourceforge.net/ 00:05:36.087 00:05:36.087 00:05:36.087 Suite: components_suite 00:05:36.087 Test: vtophys_malloc_test ...passed 00:05:36.087 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:36.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.087 EAL: Restoring previous memory policy: 4 00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.087 EAL: request: mp_malloc_sync 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.087 EAL: request: mp_malloc_sync 00:05:36.087 EAL: No shared files mode enabled, IPC is disabled 00:05:36.087 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.087 EAL: Trying to obtain current memory policy. 
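The expand/shrink pairs that follow come from vtophys_spdk_malloc_test allocating and then freeing progressively larger DMA-safe buffers; each allocation grows the DPDK heap, which fires the 'spdk:(nil)' mem event callback registered above. A minimal sketch of the same allocate-translate-free pattern against the public env API (the program name and buffer size are illustrative, not taken from the test):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;
        void *buf;
        uint64_t paddr;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";   /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* DMA-safe allocation; large requests expand the heap and
         * trigger the mem event callbacks seen in this log. */
        buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
        if (buf == NULL) {
            return 1;
        }

        /* Translate virtual to physical/IOVA; returns SPDK_VTOPHYS_ERROR
         * if the address is not backed by registered memory. */
        paddr = spdk_vtophys(buf, NULL);
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

        spdk_dma_free(buf);
        return 0;
    }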
00:05:36.087 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.087 EAL: Restoring previous memory policy: 4
00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.087 EAL: request: mp_malloc_sync
00:05:36.087 EAL: No shared files mode enabled, IPC is disabled
00:05:36.087 EAL: Heap on socket 0 was expanded by 6MB
00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.087 EAL: request: mp_malloc_sync
00:05:36.087 EAL: No shared files mode enabled, IPC is disabled
00:05:36.087 EAL: Heap on socket 0 was shrunk by 6MB
00:05:36.087 EAL: Trying to obtain current memory policy.
00:05:36.087 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.087 EAL: Restoring previous memory policy: 4
00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.087 EAL: request: mp_malloc_sync
00:05:36.087 EAL: No shared files mode enabled, IPC is disabled
00:05:36.087 EAL: Heap on socket 0 was expanded by 10MB
00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.087 EAL: request: mp_malloc_sync
00:05:36.087 EAL: No shared files mode enabled, IPC is disabled
00:05:36.087 EAL: Heap on socket 0 was shrunk by 10MB
00:05:36.087 EAL: Trying to obtain current memory policy.
00:05:36.087 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.087 EAL: Restoring previous memory policy: 4
00:05:36.087 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.087 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was expanded by 18MB
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was shrunk by 18MB
00:05:36.088 EAL: Trying to obtain current memory policy.
00:05:36.088 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.088 EAL: Restoring previous memory policy: 4
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was expanded by 34MB
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was shrunk by 34MB
00:05:36.088 EAL: Trying to obtain current memory policy.
00:05:36.088 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.088 EAL: Restoring previous memory policy: 4
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was expanded by 66MB
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was shrunk by 66MB
00:05:36.088 EAL: Trying to obtain current memory policy.
00:05:36.088 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.088 EAL: Restoring previous memory policy: 4
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was expanded by 130MB
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was shrunk by 130MB
00:05:36.088 EAL: Trying to obtain current memory policy.
00:05:36.088 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.088 EAL: Restoring previous memory policy: 4
00:05:36.088 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.088 EAL: request: mp_malloc_sync
00:05:36.088 EAL: No shared files mode enabled, IPC is disabled
00:05:36.088 EAL: Heap on socket 0 was expanded by 258MB
00:05:36.346 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.346 EAL: request: mp_malloc_sync
00:05:36.346 EAL: No shared files mode enabled, IPC is disabled
00:05:36.346 EAL: Heap on socket 0 was shrunk by 258MB
00:05:36.346 EAL: Trying to obtain current memory policy.
00:05:36.346 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.346 EAL: Restoring previous memory policy: 4
00:05:36.346 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.346 EAL: request: mp_malloc_sync
00:05:36.346 EAL: No shared files mode enabled, IPC is disabled
00:05:36.346 EAL: Heap on socket 0 was expanded by 514MB
00:05:36.604 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.604 EAL: request: mp_malloc_sync
00:05:36.604 EAL: No shared files mode enabled, IPC is disabled
00:05:36.604 EAL: Heap on socket 0 was shrunk by 514MB
00:05:36.604 EAL: Trying to obtain current memory policy.
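Each "Calling mem event callback 'spdk:(nil)'" line above is DPDK notifying listeners just before it maps or unmaps hugepage segments as the malloc heap doubles through 6MB, 10MB, 18MB and so on; SPDK hooks this notification to keep its address translation maps in sync. A sketch of the underlying DPDK hook (the callback name "sketch" and the prints are illustrative):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Called by DPDK with the affected virtual range whenever the
     * malloc heap grows (ALLOC) or shrinks (FREE), producing the
     * expand/shrink pairs seen in this log. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len,
                 void *arg)
    {
        (void)arg;
        printf("%s: addr %p len %zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            return 1;
        }
        /* Fires on every subsequent heap expand/shrink, like 'spdk:(nil)'. */
        rte_mem_event_callback_register("sketch", mem_event_cb, NULL);
        return 0;
    }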
00:05:36.604 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.862 EAL: Restoring previous memory policy: 4
00:05:36.862 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.862 EAL: request: mp_malloc_sync
00:05:36.862 EAL: No shared files mode enabled, IPC is disabled
00:05:36.862 EAL: Heap on socket 0 was expanded by 1026MB
00:05:36.862 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.120 EAL: request: mp_malloc_sync
00:05:37.120 EAL: No shared files mode enabled, IPC is disabled
00:05:37.120 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:37.120 passed
00:05:37.120
00:05:37.120 Run Summary: Type Total Ran Passed Failed Inactive
00:05:37.120 suites 1 1 n/a 0 0
00:05:37.120 tests 2 2 2 0 0
00:05:37.120 asserts 497 497 497 0 n/a
00:05:37.120
00:05:37.120 Elapsed time = 1.010 seconds
00:05:37.120 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.120 EAL: request: mp_malloc_sync
00:05:37.120 EAL: No shared files mode enabled, IPC is disabled
00:05:37.120 EAL: Heap on socket 0 was shrunk by 2MB
00:05:37.120 EAL: No shared files mode enabled, IPC is disabled
00:05:37.120 EAL: No shared files mode enabled, IPC is disabled
00:05:37.120 EAL: No shared files mode enabled, IPC is disabled
00:05:37.120
00:05:37.120 real 0m1.182s
00:05:37.120 user 0m0.664s
00:05:37.120 sys 0m0.486s
00:05:37.120 22:56:29 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:37.120 22:56:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:37.120 ************************************
00:05:37.120 END TEST env_vtophys
00:05:37.120 ************************************
00:05:37.120 22:56:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut
00:05:37.120 22:56:29 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:37.120 22:56:29 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:37.120 22:56:29 env -- common/autotest_common.sh@10 -- # set +x
00:05:37.120 ************************************
00:05:37.120 START TEST env_pci
00:05:37.120 ************************************
00:05:37.120 22:56:29 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut
00:05:37.120
00:05:37.120
00:05:37.120 CUnit - A unit testing framework for C - Version 2.1-3
00:05:37.120 http://cunit.sourceforge.net/
00:05:37.120
00:05:37.120
00:05:37.120 Suite: pci
00:05:37.120 Test: pci_hook ...[2024-06-07 22:56:29.312595] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4125491 has claimed it
00:05:37.120 EAL: Cannot find device (10000:00:01.0)
00:05:37.120 EAL: Failed to attach device on primary process
00:05:37.120 passed
00:05:37.120
00:05:37.120 Run Summary: Type Total Ran Passed Failed Inactive
00:05:37.120 suites 1 1 n/a 0 0
00:05:37.120 tests 1 1 1 0 0
00:05:37.120 asserts 25 25 25 0 n/a
00:05:37.120
00:05:37.120 Elapsed time = 0.050 seconds
00:05:37.120
00:05:37.120 real 0m0.070s
00:05:37.120 user 0m0.015s
00:05:37.120 sys 0m0.055s
00:05:37.120 22:56:29 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:37.120 22:56:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:37.120 ************************************
00:05:37.120 END TEST env_pci
00:05:37.120 ************************************
00:05:37.378 22:56:29 env -- env/env.sh@14 -- # argv='-c 0x1 '
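The pci_hook failure above is deliberate: per the error message, another process (4125491) already holds the lock file for the fake device 10000:00:01.0, so pci_ut's own spdk_pci_device_claim() cannot create it and the attach is expected to fail. A sketch of how a claim is attempted during enumeration (the callback wiring is illustrative and enumeration return conventions are simplified):

    #include <stdio.h>
    #include "spdk/env.h"

    /* Called once per NVMe-class PCI device found on the bus. */
    static int
    enum_cb(void *ctx, struct spdk_pci_device *dev)
    {
        /* Creates and locks /var/tmp/spdk_pci_lock_<addr>; fails when
         * another process has already claimed the device, which is
         * exactly the condition pci_ut provokes above. */
        if (spdk_pci_device_claim(dev) < 0) {
            printf("device already claimed by another process\n");
        }
        return 0;
    }

    /* After spdk_env_init():
     *     spdk_pci_enumerate(spdk_pci_nvme_get_driver(), enum_cb, NULL);
     */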
22:56:29 env -- env/env.sh@15 -- # uname
00:05:37.378 22:56:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:37.378 22:56:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:37.378 22:56:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:37.378 22:56:29 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']'
00:05:37.378 22:56:29 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:37.378 22:56:29 env -- common/autotest_common.sh@10 -- # set +x
00:05:37.378 ************************************
00:05:37.378 START TEST env_dpdk_post_init
00:05:37.378 ************************************
00:05:37.378 22:56:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:37.378 EAL: Detected CPU lcores: 112
00:05:37.378 EAL: Detected NUMA nodes: 2
00:05:37.378 EAL: Detected static linkage of DPDK
00:05:37.378 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:37.378 EAL: Selected IOVA mode 'VA'
00:05:37.378 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.378 EAL: VFIO support initialized
00:05:37.378 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:38.313 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:05:42.506 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:05:42.506 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000
00:05:42.506 Starting DPDK initialization...
00:05:42.506 Starting SPDK post initialization...
00:05:42.506 SPDK NVMe probe
00:05:42.506 Attaching to 0000:d8:00.0
00:05:42.506 Attached to 0000:d8:00.0
00:05:42.506 Cleaning up...
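env_dpdk_post_init drives the same startup path an SPDK application uses: the -c 0x1 core mask and --base-virtaddr seen above map onto spdk_env_opts fields, and the NVMe probe then attaches the controller at 0000:d8:00.0. A compact sketch of that sequence (the program name and prints are illustrative; error handling is trimmed):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* attach to every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);   /* e.g. 0000:d8:00.0 */
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_sketch";               /* illustrative */
        opts.core_mask = "0x1";                 /* matches -c 0x1 */
        opts.base_virtaddr = 0x200000000000;    /* matches --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* Scan the local PCIe bus for NVMe controllers, as in the log. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }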
00:05:42.506
00:05:42.506 real 0m4.805s
00:05:42.506 user 0m3.561s
00:05:42.506 sys 0m0.489s
00:05:42.506 22:56:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:42.506 22:56:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:42.506 ************************************
00:05:42.506 END TEST env_dpdk_post_init
00:05:42.506 ************************************
00:05:42.506 22:56:34 env -- env/env.sh@26 -- # uname
00:05:42.506 22:56:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:42.506 22:56:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:42.506 22:56:34 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:42.506 22:56:34 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:42.506 22:56:34 env -- common/autotest_common.sh@10 -- # set +x
00:05:42.506 ************************************
00:05:42.506 START TEST env_mem_callbacks
00:05:42.506 ************************************
00:05:42.506 22:56:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:42.506 EAL: Detected CPU lcores: 112
00:05:42.506 EAL: Detected NUMA nodes: 2
00:05:42.506 EAL: Detected static linkage of DPDK
00:05:42.506 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:42.506 EAL: Selected IOVA mode 'VA'
00:05:42.506 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.506 EAL: VFIO support initialized
00:05:42.506 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:42.506
00:05:42.506
00:05:42.506 CUnit - A unit testing framework for C - Version 2.1-3
00:05:42.506 http://cunit.sourceforge.net/
00:05:42.506
00:05:42.506
00:05:42.506 Suite: memory
00:05:42.506 Test: test ...
00:05:42.506 register 0x200000200000 2097152
00:05:42.506 malloc 3145728
00:05:42.506 register 0x200000400000 4194304
00:05:42.506 buf 0x200000500000 len 3145728 PASSED
00:05:42.506 malloc 64
00:05:42.506 buf 0x2000004fff40 len 64 PASSED
00:05:42.506 malloc 4194304
00:05:42.506 register 0x200000800000 6291456
00:05:42.506 buf 0x200000a00000 len 4194304 PASSED
00:05:42.506 free 0x200000500000 3145728
00:05:42.506 free 0x2000004fff40 64
00:05:42.506 unregister 0x200000400000 4194304 PASSED
00:05:42.506 free 0x200000a00000 4194304
00:05:42.506 unregister 0x200000800000 6291456 PASSED
00:05:42.506 malloc 8388608
00:05:42.506 register 0x200000400000 10485760
00:05:42.506 buf 0x200000600000 len 8388608 PASSED
00:05:42.506 free 0x200000600000 8388608
00:05:42.506 unregister 0x200000400000 10485760 PASSED
00:05:42.506 passed
00:05:42.506
00:05:42.506 Run Summary: Type Total Ran Passed Failed Inactive
00:05:42.506 suites 1 1 n/a 0 0
00:05:42.506 tests 1 1 1 0 0
00:05:42.506 asserts 15 15 15 0 n/a
00:05:42.506
00:05:42.506 Elapsed time = 0.008 seconds
00:05:42.506
00:05:42.506 real 0m0.086s
00:05:42.506 user 0m0.027s
00:05:42.506 sys 0m0.059s
00:05:42.506 22:56:34 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:42.506 22:56:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:42.506 ************************************
00:05:42.506 END TEST env_mem_callbacks
00:05:42.506 ************************************
00:05:42.506
00:05:42.506 real 0m6.752s
00:05:42.506 user 0m4.553s
00:05:42.506 sys 0m1.449s
00:05:42.506 22:56:34 env -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:42.506 22:56:34 env -- common/autotest_common.sh@10 -- # set +x
00:05:42.506 ************************************
00:05:42.506 END TEST env
00:05:42.506 ************************************
00:05:42.506 22:56:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:05:42.506 22:56:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:42.506 22:56:34 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:42.506 22:56:34 -- common/autotest_common.sh@10 -- # set +x
00:05:42.506 ************************************
00:05:42.506 START TEST rpc
00:05:42.506 ************************************
00:05:42.506 22:56:34 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:05:42.506 * Looking for test storage...
00:05:42.506 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
00:05:42.506 22:56:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4126409
00:05:42.507 22:56:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:42.507 22:56:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:42.507 22:56:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4126409
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@830 -- # '[' -z 4126409 ']'
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
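The register/unregister lines in the env_mem_callbacks output above are printed by the test's own notify callback: it allocates a memory map whose callback runs for every region added to or removed from SPDK's registered memory, so large allocations produce register lines and frees produce unregister lines. A sketch of that mechanism (the prints mirror the log; map creation details are illustrative):

    #include <stdio.h>
    #include "spdk/env.h"

    /* Invoked for every region registered with or removed from the
     * SPDK memory map, yielding lines like the trace above. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action,
              void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ?
               "register" : "unregister", vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = {
        .notify_cb = notify_cb,
        .are_contiguous = NULL,
    };

    /* With the env initialized, allocating the map replays REGISTER
     * notifications for existing regions; later DMA allocations and
     * frees then produce the register/unregister pairs seen above. */
    void
    watch_registrations(void)
    {
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
        void *buf = spdk_dma_malloc(4 * 1024 * 1024, 0, NULL);

        spdk_dma_free(buf);
        spdk_mem_map_free(&map);
    }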
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:42.507 22:56:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:42.507 [2024-06-07 22:56:34.696515] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:05:42.507 [2024-06-07 22:56:34.696586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126409 ]
00:05:42.507 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.507 [2024-06-07 22:56:34.811787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.507 [2024-06-07 22:56:34.901440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:42.507 [2024-06-07 22:56:34.901485] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4126409' to capture a snapshot of events at runtime.
00:05:42.507 [2024-06-07 22:56:34.901499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:42.507 [2024-06-07 22:56:34.901511] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:42.507 [2024-06-07 22:56:34.901520] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4126409 for offline analysis/debug.
00:05:42.507 [2024-06-07 22:56:34.901546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.778 22:56:35 rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:43.778 22:56:35 rpc -- common/autotest_common.sh@863 -- # return 0
00:05:43.778 22:56:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
00:05:43.778 22:56:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
00:05:43.778 22:56:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:43.778 22:56:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:43.778 22:56:35 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:43.778 22:56:35 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:43.778 22:56:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 ************************************
00:05:43.778 START TEST rpc_integrity
00:05:43.778 ************************************
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:43.778 {
00:05:43.778 "name": "Malloc0",
00:05:43.778 "aliases": [
00:05:43.778 "6e67bd04-c54a-4dc1-8504-e618bffc7838"
00:05:43.778 ],
00:05:43.778 "product_name": "Malloc disk",
00:05:43.778 "block_size": 512,
00:05:43.778 "num_blocks": 16384,
00:05:43.778 "uuid": "6e67bd04-c54a-4dc1-8504-e618bffc7838",
00:05:43.778 "assigned_rate_limits": {
00:05:43.778 "rw_ios_per_sec": 0,
00:05:43.778 "rw_mbytes_per_sec": 0,
00:05:43.778 "r_mbytes_per_sec": 0,
00:05:43.778 "w_mbytes_per_sec": 0
00:05:43.778 },
00:05:43.778 "claimed": false,
00:05:43.778 "zoned": false,
00:05:43.778 "supported_io_types": {
00:05:43.778 "read": true,
00:05:43.778 "write": true,
00:05:43.778 "unmap": true,
00:05:43.778 "write_zeroes": true,
00:05:43.778 "flush": true,
00:05:43.778 "reset": true,
00:05:43.778 "compare": false,
00:05:43.778 "compare_and_write": false,
00:05:43.778 "abort": true,
00:05:43.778 "nvme_admin": false,
00:05:43.778 "nvme_io": false
00:05:43.778 },
00:05:43.778 "memory_domains": [
00:05:43.778 {
00:05:43.778 "dma_device_id": "system",
00:05:43.778 "dma_device_type": 1
00:05:43.778 },
00:05:43.778 {
00:05:43.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:43.778 "dma_device_type": 2
00:05:43.778 }
00:05:43.778 ],
00:05:43.778 "driver_specific": {}
00:05:43.778 }
00:05:43.778 ]'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 [2024-06-07 22:56:35.809140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:43.778 [2024-06-07 22:56:35.809181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:43.778 [2024-06-07 22:56:35.809204] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x599aea0
00:05:43.778 [2024-06-07 22:56:35.809218] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:43.778 [2024-06-07 22:56:35.810298] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:43.778 [2024-06-07 22:56:35.810325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:43.778 Passthru0
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:43.778 {
00:05:43.778 "name": "Malloc0",
00:05:43.778 "aliases": [
00:05:43.778 "6e67bd04-c54a-4dc1-8504-e618bffc7838"
00:05:43.778 ],
00:05:43.778 "product_name": "Malloc disk",
00:05:43.778 "block_size": 512,
00:05:43.778 "num_blocks": 16384,
00:05:43.778 "uuid": "6e67bd04-c54a-4dc1-8504-e618bffc7838",
00:05:43.778 "assigned_rate_limits": {
00:05:43.778 "rw_ios_per_sec": 0,
00:05:43.778 "rw_mbytes_per_sec": 0,
00:05:43.778 "r_mbytes_per_sec": 0,
00:05:43.778 "w_mbytes_per_sec": 0
00:05:43.778 },
00:05:43.778 "claimed": true,
00:05:43.778 "claim_type": "exclusive_write",
00:05:43.778 "zoned": false,
00:05:43.778 "supported_io_types": {
00:05:43.778 "read": true,
00:05:43.778 "write": true,
00:05:43.778 "unmap": true,
00:05:43.778 "write_zeroes": true,
00:05:43.778 "flush": true,
00:05:43.778 "reset": true,
00:05:43.778 "compare": false,
00:05:43.778 "compare_and_write": false,
00:05:43.778 "abort": true,
00:05:43.778 "nvme_admin": false,
00:05:43.778 "nvme_io": false
00:05:43.778 },
00:05:43.778 "memory_domains": [
00:05:43.778 {
00:05:43.778 "dma_device_id": "system",
00:05:43.778 "dma_device_type": 1
00:05:43.778 },
00:05:43.778 {
00:05:43.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:43.778 "dma_device_type": 2
00:05:43.778 }
00:05:43.778 ],
00:05:43.778 "driver_specific": {}
00:05:43.778 },
00:05:43.778 {
00:05:43.778 "name": "Passthru0",
00:05:43.778 "aliases": [
00:05:43.778 "bd679c9d-195f-520b-b315-882b38ab7576"
00:05:43.778 ],
00:05:43.778 "product_name": "passthru",
00:05:43.778 "block_size": 512,
00:05:43.778 "num_blocks": 16384,
00:05:43.778 "uuid": "bd679c9d-195f-520b-b315-882b38ab7576",
00:05:43.778 "assigned_rate_limits": {
00:05:43.778 "rw_ios_per_sec": 0,
00:05:43.778 "rw_mbytes_per_sec": 0,
00:05:43.778 "r_mbytes_per_sec": 0,
00:05:43.778 "w_mbytes_per_sec": 0
00:05:43.778 },
00:05:43.778 "claimed": false,
00:05:43.778 "zoned": false,
00:05:43.778 "supported_io_types": {
00:05:43.778 "read": true,
00:05:43.778 "write": true,
00:05:43.778 "unmap": true,
00:05:43.778 "write_zeroes": true,
00:05:43.778 "flush": true,
00:05:43.778 "reset": true,
00:05:43.778 "compare": false,
00:05:43.778 "compare_and_write": false,
00:05:43.778 "abort": true,
00:05:43.778 "nvme_admin": false,
00:05:43.778 "nvme_io": false
00:05:43.778 },
00:05:43.778 "memory_domains": [
00:05:43.778 {
00:05:43.778 "dma_device_id": "system",
00:05:43.778 "dma_device_type": 1
00:05:43.778 },
00:05:43.778 {
00:05:43.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:43.778 "dma_device_type": 2
00:05:43.778 }
00:05:43.778 ],
00:05:43.778 "driver_specific": {
00:05:43.778 "passthru": {
00:05:43.778 "name": "Passthru0",
00:05:43.778 "base_bdev_name": "Malloc0"
00:05:43.778 }
00:05:43.778 }
00:05:43.778 }
00:05:43.778 ]'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
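rpc_cmd above is a thin shell wrapper around scripts/rpc.py that sends JSON-RPC requests to the spdk_tgt listening on /var/tmp/spdk.sock; bdev_malloc_create and bdev_passthru_create are methods registered inside the target, and the JSON arrays captured into $bdevs are their replies. A sketch of how such a method is registered server-side (the method name example_ping and its behavior are invented for illustration; real methods like bdev_malloc_create follow the same pattern):

    #include "spdk/rpc.h"

    /* Minimal handler: reject parameters, reply with JSON true. */
    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
        if (params != NULL) {
            spdk_jsonrpc_send_error_response(request,
                                             SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "example_ping takes no parameters");
            return;
        }
        spdk_jsonrpc_send_bool_response(request, true);
    }
    /* Registered at load time, callable once the target is up. */
    SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)

Compiled into the target (or a module linked into it), the method would be reachable the same way these tests reach the built-in ones, e.g. scripts/rpc.py example_ping.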
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:43.778 22:56:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:43.778
00:05:43.778 real 0m0.307s
00:05:43.778 user 0m0.187s
00:05:43.778 sys 0m0.049s
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:43.778 22:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 ************************************
00:05:43.778 END TEST rpc_integrity
00:05:43.778 ************************************
00:05:43.778 22:56:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:43.778 22:56:36 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:43.778 22:56:36 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:43.778 22:56:36 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.778 ************************************
00:05:43.778 START TEST rpc_plugins
00:05:43.778 ************************************
00:05:43.778 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:43.778 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:43.778 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:44.038 {
00:05:44.038 "name": "Malloc1",
00:05:44.038 "aliases": [
00:05:44.038 "0b43edad-84f8-4348-8cbe-b05b924d3461"
00:05:44.038 ],
00:05:44.038 "product_name": "Malloc disk",
00:05:44.038 "block_size": 4096,
00:05:44.038 "num_blocks": 256,
00:05:44.038 "uuid": "0b43edad-84f8-4348-8cbe-b05b924d3461",
00:05:44.038 "assigned_rate_limits": {
00:05:44.038 "rw_ios_per_sec": 0,
00:05:44.038 "rw_mbytes_per_sec": 0,
00:05:44.038 "r_mbytes_per_sec": 0,
00:05:44.038 "w_mbytes_per_sec": 0
00:05:44.038 },
00:05:44.038 "claimed": false,
00:05:44.038 "zoned": false,
00:05:44.038 "supported_io_types": {
00:05:44.038 "read": true,
00:05:44.038 "write": true,
00:05:44.038 "unmap": true,
00:05:44.038 "write_zeroes": true,
00:05:44.038 "flush": true,
00:05:44.038 "reset": true,
00:05:44.038 "compare": false,
00:05:44.038 "compare_and_write": false,
00:05:44.038 "abort": true,
00:05:44.038 "nvme_admin": false,
00:05:44.038 "nvme_io": false
00:05:44.038 },
00:05:44.038 "memory_domains": [
00:05:44.038 {
00:05:44.038 "dma_device_id": "system",
00:05:44.038 "dma_device_type": 1
00:05:44.038 },
00:05:44.038 {
00:05:44.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:44.038 "dma_device_type": 2
00:05:44.038 }
00:05:44.038 ],
00:05:44.038 "driver_specific": {}
00:05:44.038 }
00:05:44.038 ]'
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:44.038 22:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:44.038
00:05:44.038 real 0m0.143s
00:05:44.038 user 0m0.088s
00:05:44.038 sys 0m0.022s
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:44.038 22:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 ************************************
00:05:44.038 END TEST rpc_plugins
00:05:44.038 ************************************
00:05:44.038 22:56:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:44.038 22:56:36 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:44.038 22:56:36 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:44.038 22:56:36 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.038 ************************************
00:05:44.038 START TEST rpc_trace_cmd_test
00:05:44.038 ************************************
00:05:44.038 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:44.038 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:44.038 22:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.038 22:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:44.039 22:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.039 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:44.039 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4126409",
00:05:44.039 "tpoint_group_mask": "0x8",
00:05:44.039 "iscsi_conn": {
00:05:44.039 "mask": "0x2",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "scsi": {
00:05:44.039 "mask": "0x4",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "bdev": {
00:05:44.039 "mask": "0x8",
00:05:44.039 "tpoint_mask": "0xffffffffffffffff"
00:05:44.039 },
00:05:44.039 "nvmf_rdma": {
00:05:44.039 "mask": "0x10",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "nvmf_tcp": {
00:05:44.039 "mask": "0x20",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "ftl": {
00:05:44.039 "mask": "0x40",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "blobfs": {
00:05:44.039 "mask": "0x80",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "dsa": {
00:05:44.039 "mask": "0x200",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "thread": {
00:05:44.039 "mask": "0x400",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "nvme_pcie": {
00:05:44.039 "mask": "0x800",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "iaa": {
00:05:44.039 "mask": "0x1000",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "nvme_tcp": {
00:05:44.039 "mask": "0x2000",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "bdev_nvme": {
00:05:44.039 "mask": "0x4000",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 },
00:05:44.039 "sock": {
00:05:44.039 "mask": "0x8000",
00:05:44.039 "tpoint_mask": "0x0"
00:05:44.039 }
00:05:44.039 }'
00:05:44.039 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:44.297
00:05:44.297 real 0m0.235s
00:05:44.297 user 0m0.194s
00:05:44.297 sys 0m0.032s
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:44.297 22:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:44.297 ************************************
00:05:44.297 END TEST rpc_trace_cmd_test
00:05:44.297 ************************************
00:05:44.297 22:56:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:44.297 22:56:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:44.297 22:56:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:44.297 22:56:36 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:44.297 22:56:36 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:44.297 22:56:36 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.297 ************************************
00:05:44.297 START TEST rpc_daemon_integrity
00:05:44.297 ************************************
00:05:44.297 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:44.298 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.298 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:44.557 {
00:05:44.557 "name": "Malloc2",
00:05:44.557 "aliases": [
00:05:44.557 "b946d019-8c7b-40f1-88e7-ea8f0523bbb1"
00:05:44.557 ],
00:05:44.557 "product_name": "Malloc disk",
00:05:44.557 "block_size": 512,
00:05:44.557 "num_blocks": 16384,
00:05:44.557 "uuid": "b946d019-8c7b-40f1-88e7-ea8f0523bbb1",
00:05:44.557 "assigned_rate_limits": {
00:05:44.557 "rw_ios_per_sec": 0,
00:05:44.557 "rw_mbytes_per_sec": 0,
00:05:44.557 "r_mbytes_per_sec": 0,
00:05:44.557 "w_mbytes_per_sec": 0
00:05:44.557 },
00:05:44.557 "claimed": false,
00:05:44.557 "zoned": false,
00:05:44.557 "supported_io_types": {
00:05:44.557 "read": true,
00:05:44.557 "write": true,
00:05:44.557 "unmap": true,
00:05:44.557 "write_zeroes": true,
00:05:44.557 "flush": true,
00:05:44.557 "reset": true,
00:05:44.557 "compare": false,
00:05:44.557 "compare_and_write": false,
00:05:44.557 "abort": true,
00:05:44.557 "nvme_admin": false,
00:05:44.557 "nvme_io": false
00:05:44.557 },
00:05:44.557 "memory_domains": [
00:05:44.557 {
00:05:44.557 "dma_device_id": "system",
00:05:44.557 "dma_device_type": 1
00:05:44.557 },
00:05:44.557 {
00:05:44.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:44.557 "dma_device_type": 2
00:05:44.557 }
00:05:44.557 ],
00:05:44.557 "driver_specific": {}
00:05:44.557 }
00:05:44.557 ]'
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.557 [2024-06-07 22:56:36.707515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:44.557 [2024-06-07 22:56:36.707555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:44.557 [2024-06-07 22:56:36.707584] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5992240
00:05:44.557 [2024-06-07 22:56:36.707598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:44.557 [2024-06-07 22:56:36.708572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:44.557 [2024-06-07 22:56:36.708607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:44.557 Passthru0
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:44.557 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:44.558 {
00:05:44.558 "name": "Malloc2",
00:05:44.558 "aliases": [
00:05:44.558 "b946d019-8c7b-40f1-88e7-ea8f0523bbb1"
00:05:44.558 ],
00:05:44.558 "product_name": "Malloc disk",
00:05:44.558 "block_size": 512,
00:05:44.558 "num_blocks": 16384,
00:05:44.558 "uuid": "b946d019-8c7b-40f1-88e7-ea8f0523bbb1",
00:05:44.558 "assigned_rate_limits": {
00:05:44.558 "rw_ios_per_sec": 0,
00:05:44.558 "rw_mbytes_per_sec": 0,
00:05:44.558 "r_mbytes_per_sec": 0,
00:05:44.558 "w_mbytes_per_sec": 0
00:05:44.558 },
00:05:44.558 "claimed": true,
00:05:44.558 "claim_type": "exclusive_write",
00:05:44.558 "zoned": false,
00:05:44.558 "supported_io_types": {
00:05:44.558 "read": true,
00:05:44.558 "write": true,
00:05:44.558 "unmap": true,
00:05:44.558 "write_zeroes": true,
00:05:44.558 "flush": true,
00:05:44.558 "reset": true,
00:05:44.558 "compare": false,
00:05:44.558 "compare_and_write": false,
00:05:44.558 "abort": true,
00:05:44.558 "nvme_admin": false,
00:05:44.558 "nvme_io": false
00:05:44.558 },
00:05:44.558 "memory_domains": [
00:05:44.558 {
00:05:44.558 "dma_device_id": "system",
00:05:44.558 "dma_device_type": 1
00:05:44.558 },
00:05:44.558 {
00:05:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:44.558 "dma_device_type": 2
00:05:44.558 }
00:05:44.558 ],
00:05:44.558 "driver_specific": {}
00:05:44.558 },
00:05:44.558 {
00:05:44.558 "name": "Passthru0",
00:05:44.558 "aliases": [
00:05:44.558 "4fea40b3-9fa0-5751-9274-2c0686a7b3f7"
00:05:44.558 ],
00:05:44.558 "product_name": "passthru",
00:05:44.558 "block_size": 512,
00:05:44.558 "num_blocks": 16384,
00:05:44.558 "uuid": "4fea40b3-9fa0-5751-9274-2c0686a7b3f7",
00:05:44.558 "assigned_rate_limits": {
00:05:44.558 "rw_ios_per_sec": 0,
00:05:44.558 "rw_mbytes_per_sec": 0,
00:05:44.558 "r_mbytes_per_sec": 0,
00:05:44.558 "w_mbytes_per_sec": 0
00:05:44.558 },
00:05:44.558 "claimed": false,
00:05:44.558 "zoned": false,
00:05:44.558 "supported_io_types": {
00:05:44.558 "read": true,
00:05:44.558 "write": true,
00:05:44.558 "unmap": true,
00:05:44.558 "write_zeroes": true,
00:05:44.558 "flush": true,
00:05:44.558 "reset": true,
00:05:44.558 "compare": false,
00:05:44.558 "compare_and_write": false,
00:05:44.558 "abort": true,
00:05:44.558 "nvme_admin": false,
00:05:44.558 "nvme_io": false
00:05:44.558 },
00:05:44.558 "memory_domains": [
00:05:44.558 {
00:05:44.558 "dma_device_id": "system",
00:05:44.558 "dma_device_type": 1
00:05:44.558 },
00:05:44.558 {
00:05:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:44.558 "dma_device_type": 2
00:05:44.558 }
00:05:44.558 ],
00:05:44.558 "driver_specific": {
00:05:44.558 "passthru": {
00:05:44.558 "name": "Passthru0",
00:05:44.558 "base_bdev_name": "Malloc2"
00:05:44.558 }
00:05:44.558 }
00:05:44.558 }
00:05:44.558 ]'
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:44.558 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:44.817 22:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:44.817
00:05:44.817 real 0m0.289s
00:05:44.817 user 0m0.176s
00:05:44.817 sys 0m0.057s
00:05:44.817 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:44.817 22:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:44.817 ************************************
00:05:44.817 END TEST rpc_daemon_integrity
00:05:44.817 ************************************
00:05:44.817 22:56:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:44.817 22:56:36 rpc -- rpc/rpc.sh@84 -- # killprocess 4126409
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@949 -- # '[' -z 4126409 ']'
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@953 -- # kill -0 4126409
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@954 -- # uname
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4126409
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4126409'
00:05:44.817 killing process with pid 4126409
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@968 -- # kill 4126409
00:05:44.817 22:56:36 rpc -- common/autotest_common.sh@973 -- # wait 4126409
00:05:45.077
00:05:45.077 real 0m2.714s
00:05:45.077 user 0m3.475s
00:05:45.077 sys 0m0.848s
00:05:45.077 22:56:37 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:45.077 22:56:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.077 ************************************
00:05:45.077 END TEST rpc
00:05:45.077 ************************************
00:05:45.077 22:56:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:45.077 22:56:37
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:45.077 22:56:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.077 22:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.337 ************************************ 00:05:45.337 START TEST skip_rpc 00:05:45.337 ************************************ 00:05:45.337 22:56:37 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:45.337 * Looking for test storage... 00:05:45.337 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:45.337 22:56:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:45.337 22:56:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:45.337 22:56:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:45.337 22:56:37 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:45.337 22:56:37 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.337 22:56:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.337 ************************************ 00:05:45.337 START TEST skip_rpc 00:05:45.337 ************************************ 00:05:45.337 22:56:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:45.337 22:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4127116 00:05:45.337 22:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.337 22:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:45.337 22:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:45.337 [2024-06-07 22:56:37.500536] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
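[annotation] The rpc suite above ended by deleting the Passthru0/Malloc2 pair and reaping pid 4126409; test_skip_rpc, starting here, launches the target with its RPC server disabled and asserts that RPC calls are refused. A condensed sketch of that assertion, assuming rpc.py from the spdk tree shown in this log (the harness itself goes through the NOT/rpc_cmd wrappers traced below):

    # start the target without an RPC server, then prove spdk_get_version fails
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered with no server running" >&2
        exit 1
    fi
    kill "$spdk_pid" && wait "$spdk_pid"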
00:05:45.337 [2024-06-07 22:56:37.500586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127116 ] 00:05:45.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.337 [2024-06-07 22:56:37.601614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.596 [2024-06-07 22:56:37.689709] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:50.870 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4127116 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 4127116 ']' 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 4127116 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4127116 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4127116' 00:05:50.871 killing process with pid 4127116 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 4127116 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 4127116 00:05:50.871 00:05:50.871 real 0m5.390s 00:05:50.871 user 0m5.106s 00:05:50.871 sys 0m0.316s 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.871 22:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.871 ************************************ 00:05:50.871 END TEST skip_rpc 
00:05:50.871 ************************************ 00:05:50.871 22:56:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:50.871 22:56:42 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.871 22:56:42 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.871 22:56:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.871 ************************************ 00:05:50.871 START TEST skip_rpc_with_json 00:05:50.871 ************************************ 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4128177 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4128177 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 4128177 ']' 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.871 22:56:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.871 [2024-06-07 22:56:42.980759] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
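[annotation] skip_rpc_with_json exercises RPC against the live target and then snapshots its whole state to JSON. A condensed sketch of the sequence traced below, using only RPCs that appear in this log:

    # querying a transport before it exists must fail ("No such device")
    if ./scripts/rpc.py nvmf_get_transports --trtype tcp; then
        echo "unexpected: tcp transport already present" >&2
        exit 1
    fi
    ./scripts/rpc.py nvmf_create_transport -t tcp
    # dump the live configuration; the JSON printed below is this command's output
    ./scripts/rpc.py save_config > test/rpc/config.json
    # a second target (pid 4128425 below) is then booted from that file via --json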
00:05:50.871 [2024-06-07 22:56:42.980825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128177 ] 00:05:50.871 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.871 [2024-06-07 22:56:43.096966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.130 [2024-06-07 22:56:43.187868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.697 [2024-06-07 22:56:43.905164] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:51.697 request: 00:05:51.697 { 00:05:51.697 "trtype": "tcp", 00:05:51.697 "method": "nvmf_get_transports", 00:05:51.697 "req_id": 1 00:05:51.697 } 00:05:51.697 Got JSON-RPC error response 00:05:51.697 response: 00:05:51.697 { 00:05:51.697 "code": -19, 00:05:51.697 "message": "No such device" 00:05:51.697 } 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.697 [2024-06-07 22:56:43.913259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.697 22:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.956 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:51.956 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:51.956 { 00:05:51.956 "subsystems": [ 00:05:51.956 { 00:05:51.956 "subsystem": "scheduler", 00:05:51.956 "config": [ 00:05:51.957 { 00:05:51.957 "method": "framework_set_scheduler", 00:05:51.957 "params": { 00:05:51.957 "name": "static" 00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "vmd", 00:05:51.957 "config": [] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "sock", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "sock_set_default_impl", 00:05:51.957 "params": { 00:05:51.957 "impl_name": "posix" 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "sock_impl_set_options", 00:05:51.957 "params": { 00:05:51.957 "impl_name": "ssl", 00:05:51.957 "recv_buf_size": 4096, 00:05:51.957 "send_buf_size": 4096, 00:05:51.957 "enable_recv_pipe": true, 00:05:51.957 "enable_quickack": 
false, 00:05:51.957 "enable_placement_id": 0, 00:05:51.957 "enable_zerocopy_send_server": true, 00:05:51.957 "enable_zerocopy_send_client": false, 00:05:51.957 "zerocopy_threshold": 0, 00:05:51.957 "tls_version": 0, 00:05:51.957 "enable_ktls": false 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "sock_impl_set_options", 00:05:51.957 "params": { 00:05:51.957 "impl_name": "posix", 00:05:51.957 "recv_buf_size": 2097152, 00:05:51.957 "send_buf_size": 2097152, 00:05:51.957 "enable_recv_pipe": true, 00:05:51.957 "enable_quickack": false, 00:05:51.957 "enable_placement_id": 0, 00:05:51.957 "enable_zerocopy_send_server": true, 00:05:51.957 "enable_zerocopy_send_client": false, 00:05:51.957 "zerocopy_threshold": 0, 00:05:51.957 "tls_version": 0, 00:05:51.957 "enable_ktls": false 00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "iobuf", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "iobuf_set_options", 00:05:51.957 "params": { 00:05:51.957 "small_pool_count": 8192, 00:05:51.957 "large_pool_count": 1024, 00:05:51.957 "small_bufsize": 8192, 00:05:51.957 "large_bufsize": 135168 00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "keyring", 00:05:51.957 "config": [] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "vfio_user_target", 00:05:51.957 "config": null 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "accel", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "accel_set_options", 00:05:51.957 "params": { 00:05:51.957 "small_cache_size": 128, 00:05:51.957 "large_cache_size": 16, 00:05:51.957 "task_count": 2048, 00:05:51.957 "sequence_count": 2048, 00:05:51.957 "buf_count": 2048 00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "bdev", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "bdev_set_options", 00:05:51.957 "params": { 00:05:51.957 "bdev_io_pool_size": 65535, 00:05:51.957 "bdev_io_cache_size": 256, 00:05:51.957 "bdev_auto_examine": true, 00:05:51.957 "iobuf_small_cache_size": 128, 00:05:51.957 "iobuf_large_cache_size": 16 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "bdev_raid_set_options", 00:05:51.957 "params": { 00:05:51.957 "process_window_size_kb": 1024 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "bdev_nvme_set_options", 00:05:51.957 "params": { 00:05:51.957 "action_on_timeout": "none", 00:05:51.957 "timeout_us": 0, 00:05:51.957 "timeout_admin_us": 0, 00:05:51.957 "keep_alive_timeout_ms": 10000, 00:05:51.957 "arbitration_burst": 0, 00:05:51.957 "low_priority_weight": 0, 00:05:51.957 "medium_priority_weight": 0, 00:05:51.957 "high_priority_weight": 0, 00:05:51.957 "nvme_adminq_poll_period_us": 10000, 00:05:51.957 "nvme_ioq_poll_period_us": 0, 00:05:51.957 "io_queue_requests": 0, 00:05:51.957 "delay_cmd_submit": true, 00:05:51.957 "transport_retry_count": 4, 00:05:51.957 "bdev_retry_count": 3, 00:05:51.957 "transport_ack_timeout": 0, 00:05:51.957 "ctrlr_loss_timeout_sec": 0, 00:05:51.957 "reconnect_delay_sec": 0, 00:05:51.957 "fast_io_fail_timeout_sec": 0, 00:05:51.957 "disable_auto_failback": false, 00:05:51.957 "generate_uuids": false, 00:05:51.957 "transport_tos": 0, 00:05:51.957 "nvme_error_stat": false, 00:05:51.957 "rdma_srq_size": 0, 00:05:51.957 "io_path_stat": false, 00:05:51.957 "allow_accel_sequence": false, 00:05:51.957 "rdma_max_cq_size": 0, 00:05:51.957 "rdma_cm_event_timeout_ms": 0, 
00:05:51.957 "dhchap_digests": [ 00:05:51.957 "sha256", 00:05:51.957 "sha384", 00:05:51.957 "sha512" 00:05:51.957 ], 00:05:51.957 "dhchap_dhgroups": [ 00:05:51.957 "null", 00:05:51.957 "ffdhe2048", 00:05:51.957 "ffdhe3072", 00:05:51.957 "ffdhe4096", 00:05:51.957 "ffdhe6144", 00:05:51.957 "ffdhe8192" 00:05:51.957 ] 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "bdev_nvme_set_hotplug", 00:05:51.957 "params": { 00:05:51.957 "period_us": 100000, 00:05:51.957 "enable": false 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "bdev_iscsi_set_options", 00:05:51.957 "params": { 00:05:51.957 "timeout_sec": 30 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "bdev_wait_for_examine" 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "nvmf", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "nvmf_set_config", 00:05:51.957 "params": { 00:05:51.957 "discovery_filter": "match_any", 00:05:51.957 "admin_cmd_passthru": { 00:05:51.957 "identify_ctrlr": false 00:05:51.957 } 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "nvmf_set_max_subsystems", 00:05:51.957 "params": { 00:05:51.957 "max_subsystems": 1024 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "nvmf_set_crdt", 00:05:51.957 "params": { 00:05:51.957 "crdt1": 0, 00:05:51.957 "crdt2": 0, 00:05:51.957 "crdt3": 0 00:05:51.957 } 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "method": "nvmf_create_transport", 00:05:51.957 "params": { 00:05:51.957 "trtype": "TCP", 00:05:51.957 "max_queue_depth": 128, 00:05:51.957 "max_io_qpairs_per_ctrlr": 127, 00:05:51.957 "in_capsule_data_size": 4096, 00:05:51.957 "max_io_size": 131072, 00:05:51.957 "io_unit_size": 131072, 00:05:51.957 "max_aq_depth": 128, 00:05:51.957 "num_shared_buffers": 511, 00:05:51.957 "buf_cache_size": 4294967295, 00:05:51.957 "dif_insert_or_strip": false, 00:05:51.957 "zcopy": false, 00:05:51.957 "c2h_success": true, 00:05:51.957 "sock_priority": 0, 00:05:51.957 "abort_timeout_sec": 1, 00:05:51.957 "ack_timeout": 0, 00:05:51.957 "data_wr_pool_size": 0 00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "nbd", 00:05:51.957 "config": [] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "ublk", 00:05:51.957 "config": [] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "vhost_blk", 00:05:51.957 "config": [] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "scsi", 00:05:51.957 "config": null 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "iscsi", 00:05:51.957 "config": [ 00:05:51.957 { 00:05:51.957 "method": "iscsi_set_options", 00:05:51.957 "params": { 00:05:51.957 "node_base": "iqn.2016-06.io.spdk", 00:05:51.957 "max_sessions": 128, 00:05:51.957 "max_connections_per_session": 2, 00:05:51.957 "max_queue_depth": 64, 00:05:51.957 "default_time2wait": 2, 00:05:51.957 "default_time2retain": 20, 00:05:51.957 "first_burst_length": 8192, 00:05:51.957 "immediate_data": true, 00:05:51.957 "allow_duplicated_isid": false, 00:05:51.957 "error_recovery_level": 0, 00:05:51.957 "nop_timeout": 60, 00:05:51.957 "nop_in_interval": 30, 00:05:51.957 "disable_chap": false, 00:05:51.957 "require_chap": false, 00:05:51.957 "mutual_chap": false, 00:05:51.957 "chap_group": 0, 00:05:51.957 "max_large_datain_per_connection": 64, 00:05:51.957 "max_r2t_per_connection": 4, 00:05:51.957 "pdu_pool_size": 36864, 00:05:51.957 "immediate_data_pool_size": 16384, 00:05:51.957 "data_out_pool_size": 2048 
00:05:51.957 } 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 }, 00:05:51.957 { 00:05:51.957 "subsystem": "vhost_scsi", 00:05:51.957 "config": [] 00:05:51.957 } 00:05:51.957 ] 00:05:51.957 } 00:05:51.957 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:51.957 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4128177 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 4128177 ']' 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 4128177 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4128177 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4128177' 00:05:51.958 killing process with pid 4128177 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 4128177 00:05:51.958 22:56:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 4128177 00:05:52.217 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4128425 00:05:52.217 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:52.217 22:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4128425 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 4128425 ']' 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 4128425 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4128425 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4128425' 00:05:57.490 killing process with pid 4128425 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 4128425 00:05:57.490 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 4128425 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:57.750 00:05:57.750 real 
0m6.889s 00:05:57.750 user 0m6.657s 00:05:57.750 sys 0m0.760s 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.750 ************************************ 00:05:57.750 END TEST skip_rpc_with_json 00:05:57.750 ************************************ 00:05:57.750 22:56:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:57.750 22:56:49 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.750 22:56:49 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.750 22:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.750 ************************************ 00:05:57.750 START TEST skip_rpc_with_delay 00:05:57.750 ************************************ 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.750 [2024-06-07 22:56:49.944493] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
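[annotation] The spdk_app_start ERROR above is the outcome skip_rpc_with_delay is looking for: --wait-for-rpc makes no sense when the RPC server is disabled, so the launch must fail. The invocation under test, as driven through the NOT helper traced above:

    # contradictory flags: wait for an RPC that will never be served
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # must exit non-zero; the harness folds that into es=1 below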
00:05:57.750 [2024-06-07 22:56:49.944633] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:57.750 00:05:57.750 real 0m0.043s 00:05:57.750 user 0m0.016s 00:05:57.750 sys 0m0.026s 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.750 22:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:57.750 ************************************ 00:05:57.750 END TEST skip_rpc_with_delay 00:05:57.750 ************************************ 00:05:57.750 22:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:57.750 22:56:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:57.750 22:56:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:57.750 22:56:50 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.750 22:56:50 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.750 22:56:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.009 ************************************ 00:05:58.009 START TEST exit_on_failed_rpc_init 00:05:58.009 ************************************ 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4129333 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4129333 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 4129333 ']' 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.009 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.009 [2024-06-07 22:56:50.067462] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
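[annotation] exit_on_failed_rpc_init verifies that a second target cannot initialize while the first still owns /var/tmp/spdk.sock. Condensed from the two launches traced in this stretch of the log:

    ./build/bin/spdk_tgt -m 0x1 &   # pid 4129333 claims /var/tmp/spdk.sock
    # the second instance (pid 4129600 below) must fail with
    #   "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    if ./build/bin/spdk_tgt -m 0x2; then
        exit 1
    fi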
00:05:58.009 [2024-06-07 22:56:50.067543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129333 ] 00:05:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.009 [2024-06-07 22:56:50.184409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.009 [2024-06-07 22:56:50.274896] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:58.948 22:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.948 [2024-06-07 22:56:51.020107] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:58.948 [2024-06-07 22:56:51.020174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129600 ] 00:05:58.948 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.948 [2024-06-07 22:56:51.125585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.948 [2024-06-07 22:56:51.211388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.948 [2024-06-07 22:56:51.211495] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:58.948 [2024-06-07 22:56:51.211511] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:58.948 [2024-06-07 22:56:51.211522] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4129333 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 4129333 ']' 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 4129333 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4129333 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4129333' 00:05:59.208 killing process with pid 4129333 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 4129333 00:05:59.208 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 4129333 00:05:59.468 00:05:59.468 real 0m1.623s 00:05:59.468 user 0m1.829s 00:05:59.468 sys 0m0.563s 00:05:59.468 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.468 22:56:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.468 ************************************ 00:05:59.468 END TEST exit_on_failed_rpc_init 00:05:59.468 ************************************ 00:05:59.468 22:56:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:59.468 00:05:59.468 real 0m14.350s 00:05:59.468 user 0m13.753s 00:05:59.468 sys 0m1.955s 00:05:59.468 22:56:51 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.468 22:56:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.468 ************************************ 00:05:59.468 END TEST skip_rpc 00:05:59.468 ************************************ 00:05:59.727 22:56:51 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:59.727 22:56:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.727 22:56:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 
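[annotation] Every suite in this run tears its target down through the same killprocess helper; its shape, paraphrased from the autotest_common.sh trace lines that recur above (a sketch, not the exact source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1         # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                  # terminate, then reap
    }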
00:05:59.727 22:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:59.727 ************************************ 00:05:59.727 START TEST rpc_client 00:05:59.727 ************************************ 00:05:59.727 22:56:51 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:59.727 * Looking for test storage... 00:05:59.727 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:59.727 22:56:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:59.727 OK 00:05:59.727 22:56:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:59.727 00:05:59.727 real 0m0.133s 00:05:59.727 user 0m0.060s 00:05:59.727 sys 0m0.085s 00:05:59.727 22:56:51 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.727 22:56:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:59.727 ************************************ 00:05:59.727 END TEST rpc_client 00:05:59.727 ************************************ 00:05:59.727 22:56:51 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:59.727 22:56:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.727 22:56:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.727 22:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:59.987 ************************************ 00:05:59.987 START TEST json_config 00:05:59.987 ************************************ 00:05:59.987 22:56:52 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.987 22:56:52 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:59.987 22:56:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.987 22:56:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.987 22:56:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.987 22:56:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.987 22:56:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.987 22:56:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.987 22:56:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:59.987 22:56:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@47 -- # : 0 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.987 22:56:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:59.987 WARNING: No tests are enabled so not running JSON configuration tests 00:05:59.987 22:56:52 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:59.987 00:05:59.987 real 0m0.110s 00:05:59.987 user 0m0.048s 00:05:59.987 sys 0m0.064s 00:05:59.987 22:56:52 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.987 22:56:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.987 ************************************ 00:05:59.987 END TEST json_config 00:05:59.987 ************************************ 00:05:59.987 22:56:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.987 22:56:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.987 22:56:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.987 22:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:59.987 ************************************ 00:05:59.987 START TEST json_config_extra_key 00:05:59.987 ************************************ 00:05:59.987 22:56:52 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:00.249 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.249 22:56:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
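[annotation] json_config exited 0 just above because this short-fuzz run enables none of the features it covers; the gate, reconstructed from the json_config.sh@26-28 trace lines:

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
          SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi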
00:06:00.250 22:56:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.250 22:56:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.250 22:56:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.250 22:56:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.250 22:56:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.250 22:56:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.250 22:56:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:00.250 22:56:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.250 22:56:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:00.250 22:56:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:00.250 INFO: launching applications... 00:06:00.250 22:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4130010 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.250 Waiting for target to run... 00:06:00.250 22:56:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4130010 /var/tmp/spdk_tgt.sock 00:06:00.251 22:56:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 4130010 ']' 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.251 22:56:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.251 [2024-06-07 22:56:52.336942] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
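[annotation] json_config_extra_key boots the target directly from a JSON file and talks to it on its own socket; the launch recorded above, spelled out (the app_pid/app_socket bookkeeping comes from the common.sh arrays declared earlier):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid[target]=$!
    # waitforlisten blocks until /var/tmp/spdk_tgt.sock accepts connections;
    # shutdown is then a SIGINT followed by the 30 x 0.5 s kill -0 poll traced below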
00:06:00.251 [2024-06-07 22:56:52.337006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130010 ] 00:06:00.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.848 [2024-06-07 22:56:52.840327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.848 [2024-06-07 22:56:52.942566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.108 22:56:53 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.108 22:56:53 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:01.108 00:06:01.108 22:56:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:01.108 INFO: shutting down applications... 00:06:01.108 22:56:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4130010 ]] 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4130010 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4130010 00:06:01.108 22:56:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4130010 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.677 22:56:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.677 SPDK target shutdown done 00:06:01.677 22:56:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.677 Success 00:06:01.677 00:06:01.677 real 0m1.578s 00:06:01.678 user 0m1.214s 00:06:01.678 sys 0m0.637s 00:06:01.678 22:56:53 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.678 22:56:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.678 ************************************ 00:06:01.678 END TEST json_config_extra_key 00:06:01.678 ************************************ 00:06:01.678 22:56:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.678 22:56:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:01.678 22:56:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:01.678 22:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.678 ************************************ 
00:06:01.678 START TEST alias_rpc 00:06:01.678 ************************************ 00:06:01.678 22:56:53 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.937 * Looking for test storage... 00:06:01.937 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:01.937 22:56:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.937 22:56:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4130333 00:06:01.937 22:56:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.937 22:56:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4130333 00:06:01.937 22:56:53 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 4130333 ']' 00:06:01.937 22:56:53 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.937 22:56:53 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.938 22:56:53 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.938 22:56:53 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.938 22:56:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.938 [2024-06-07 22:56:53.994903] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:01.938 [2024-06-07 22:56:53.994966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130333 ] 00:06:01.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.938 [2024-06-07 22:56:54.111903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.938 [2024-06-07 22:56:54.200170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.876 22:56:54 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:02.876 22:56:54 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:02.876 22:56:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:03.136 22:56:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4130333 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 4130333 ']' 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 4130333 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4130333 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4130333' 00:06:03.136 killing process with pid 4130333 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@968 -- # kill 4130333 00:06:03.136 22:56:55 alias_rpc -- common/autotest_common.sh@973 -- # wait 
4130333 00:06:03.395 00:06:03.395 real 0m1.687s 00:06:03.395 user 0m1.881s 00:06:03.395 sys 0m0.504s 00:06:03.395 22:56:55 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.395 22:56:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.395 ************************************ 00:06:03.396 END TEST alias_rpc 00:06:03.396 ************************************ 00:06:03.396 22:56:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:03.396 22:56:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.396 22:56:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.396 22:56:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.396 22:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.396 ************************************ 00:06:03.396 START TEST spdkcli_tcp 00:06:03.396 ************************************ 00:06:03.396 22:56:55 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.656 * Looking for test storage... 00:06:03.656 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:03.656 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.657 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4130657 00:06:03.657 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4130657 00:06:03.657 22:56:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 4130657 ']' 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.657 22:56:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.657 [2024-06-07 22:56:55.775931] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
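The spdkcli_tcp run that follows is the one test in this stretch that talks RPC over TCP rather than the UNIX socket: it starts spdk_tgt on two cores (-m 0x3), bridges port 9998 to the socket with socat, and lists every registered RPC method. The bridge in isolation, flags exactly as they appear in the trace below:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP <-> UNIX bridge
    socat_pid=$!
    # -r 100: retry up to 100 times, -t 2: 2 s timeout, -s/-p: the TCP endpoint
    $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid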
00:06:03.657 [2024-06-07 22:56:55.775992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130657 ] 00:06:03.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.657 [2024-06-07 22:56:55.891791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.917 [2024-06-07 22:56:55.983342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.917 [2024-06-07 22:56:55.983347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.485 22:56:56 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:04.485 22:56:56 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:06:04.485 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4130808 00:06:04.485 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.485 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.745 [ 00:06:04.745 "spdk_get_version", 00:06:04.745 "rpc_get_methods", 00:06:04.745 "trace_get_info", 00:06:04.745 "trace_get_tpoint_group_mask", 00:06:04.745 "trace_disable_tpoint_group", 00:06:04.745 "trace_enable_tpoint_group", 00:06:04.745 "trace_clear_tpoint_mask", 00:06:04.745 "trace_set_tpoint_mask", 00:06:04.745 "vfu_tgt_set_base_path", 00:06:04.745 "framework_get_pci_devices", 00:06:04.745 "framework_get_config", 00:06:04.745 "framework_get_subsystems", 00:06:04.745 "keyring_get_keys", 00:06:04.745 "iobuf_get_stats", 00:06:04.745 "iobuf_set_options", 00:06:04.745 "sock_get_default_impl", 00:06:04.745 "sock_set_default_impl", 00:06:04.745 "sock_impl_set_options", 00:06:04.745 "sock_impl_get_options", 00:06:04.745 "vmd_rescan", 00:06:04.745 "vmd_remove_device", 00:06:04.745 "vmd_enable", 00:06:04.745 "accel_get_stats", 00:06:04.745 "accel_set_options", 00:06:04.745 "accel_set_driver", 00:06:04.745 "accel_crypto_key_destroy", 00:06:04.745 "accel_crypto_keys_get", 00:06:04.745 "accel_crypto_key_create", 00:06:04.745 "accel_assign_opc", 00:06:04.745 "accel_get_module_info", 00:06:04.745 "accel_get_opc_assignments", 00:06:04.745 "notify_get_notifications", 00:06:04.745 "notify_get_types", 00:06:04.745 "bdev_get_histogram", 00:06:04.745 "bdev_enable_histogram", 00:06:04.745 "bdev_set_qos_limit", 00:06:04.745 "bdev_set_qd_sampling_period", 00:06:04.745 "bdev_get_bdevs", 00:06:04.745 "bdev_reset_iostat", 00:06:04.745 "bdev_get_iostat", 00:06:04.745 "bdev_examine", 00:06:04.745 "bdev_wait_for_examine", 00:06:04.745 "bdev_set_options", 00:06:04.745 "scsi_get_devices", 00:06:04.745 "thread_set_cpumask", 00:06:04.745 "framework_get_scheduler", 00:06:04.745 "framework_set_scheduler", 00:06:04.745 "framework_get_reactors", 00:06:04.745 "thread_get_io_channels", 00:06:04.745 "thread_get_pollers", 00:06:04.745 "thread_get_stats", 00:06:04.745 "framework_monitor_context_switch", 00:06:04.745 "spdk_kill_instance", 00:06:04.745 "log_enable_timestamps", 00:06:04.745 "log_get_flags", 00:06:04.745 "log_clear_flag", 00:06:04.745 "log_set_flag", 00:06:04.745 "log_get_level", 00:06:04.745 "log_set_level", 00:06:04.745 "log_get_print_level", 00:06:04.745 "log_set_print_level", 00:06:04.745 "framework_enable_cpumask_locks", 00:06:04.745 "framework_disable_cpumask_locks", 00:06:04.745 "framework_wait_init", 00:06:04.745 
"framework_start_init", 00:06:04.745 "virtio_blk_create_transport", 00:06:04.745 "virtio_blk_get_transports", 00:06:04.745 "vhost_controller_set_coalescing", 00:06:04.745 "vhost_get_controllers", 00:06:04.745 "vhost_delete_controller", 00:06:04.745 "vhost_create_blk_controller", 00:06:04.745 "vhost_scsi_controller_remove_target", 00:06:04.745 "vhost_scsi_controller_add_target", 00:06:04.745 "vhost_start_scsi_controller", 00:06:04.745 "vhost_create_scsi_controller", 00:06:04.745 "ublk_recover_disk", 00:06:04.745 "ublk_get_disks", 00:06:04.745 "ublk_stop_disk", 00:06:04.745 "ublk_start_disk", 00:06:04.745 "ublk_destroy_target", 00:06:04.745 "ublk_create_target", 00:06:04.745 "nbd_get_disks", 00:06:04.745 "nbd_stop_disk", 00:06:04.745 "nbd_start_disk", 00:06:04.745 "env_dpdk_get_mem_stats", 00:06:04.745 "nvmf_stop_mdns_prr", 00:06:04.745 "nvmf_publish_mdns_prr", 00:06:04.745 "nvmf_subsystem_get_listeners", 00:06:04.745 "nvmf_subsystem_get_qpairs", 00:06:04.745 "nvmf_subsystem_get_controllers", 00:06:04.745 "nvmf_get_stats", 00:06:04.745 "nvmf_get_transports", 00:06:04.745 "nvmf_create_transport", 00:06:04.745 "nvmf_get_targets", 00:06:04.745 "nvmf_delete_target", 00:06:04.745 "nvmf_create_target", 00:06:04.745 "nvmf_subsystem_allow_any_host", 00:06:04.745 "nvmf_subsystem_remove_host", 00:06:04.745 "nvmf_subsystem_add_host", 00:06:04.745 "nvmf_ns_remove_host", 00:06:04.745 "nvmf_ns_add_host", 00:06:04.745 "nvmf_subsystem_remove_ns", 00:06:04.745 "nvmf_subsystem_add_ns", 00:06:04.745 "nvmf_subsystem_listener_set_ana_state", 00:06:04.745 "nvmf_discovery_get_referrals", 00:06:04.745 "nvmf_discovery_remove_referral", 00:06:04.745 "nvmf_discovery_add_referral", 00:06:04.745 "nvmf_subsystem_remove_listener", 00:06:04.745 "nvmf_subsystem_add_listener", 00:06:04.745 "nvmf_delete_subsystem", 00:06:04.745 "nvmf_create_subsystem", 00:06:04.745 "nvmf_get_subsystems", 00:06:04.745 "nvmf_set_crdt", 00:06:04.745 "nvmf_set_config", 00:06:04.745 "nvmf_set_max_subsystems", 00:06:04.745 "iscsi_get_histogram", 00:06:04.745 "iscsi_enable_histogram", 00:06:04.745 "iscsi_set_options", 00:06:04.745 "iscsi_get_auth_groups", 00:06:04.745 "iscsi_auth_group_remove_secret", 00:06:04.745 "iscsi_auth_group_add_secret", 00:06:04.745 "iscsi_delete_auth_group", 00:06:04.745 "iscsi_create_auth_group", 00:06:04.745 "iscsi_set_discovery_auth", 00:06:04.745 "iscsi_get_options", 00:06:04.745 "iscsi_target_node_request_logout", 00:06:04.745 "iscsi_target_node_set_redirect", 00:06:04.746 "iscsi_target_node_set_auth", 00:06:04.746 "iscsi_target_node_add_lun", 00:06:04.746 "iscsi_get_stats", 00:06:04.746 "iscsi_get_connections", 00:06:04.746 "iscsi_portal_group_set_auth", 00:06:04.746 "iscsi_start_portal_group", 00:06:04.746 "iscsi_delete_portal_group", 00:06:04.746 "iscsi_create_portal_group", 00:06:04.746 "iscsi_get_portal_groups", 00:06:04.746 "iscsi_delete_target_node", 00:06:04.746 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.746 "iscsi_target_node_add_pg_ig_maps", 00:06:04.746 "iscsi_create_target_node", 00:06:04.746 "iscsi_get_target_nodes", 00:06:04.746 "iscsi_delete_initiator_group", 00:06:04.746 "iscsi_initiator_group_remove_initiators", 00:06:04.746 "iscsi_initiator_group_add_initiators", 00:06:04.746 "iscsi_create_initiator_group", 00:06:04.746 "iscsi_get_initiator_groups", 00:06:04.746 "keyring_linux_set_options", 00:06:04.746 "keyring_file_remove_key", 00:06:04.746 "keyring_file_add_key", 00:06:04.746 "vfu_virtio_create_scsi_endpoint", 00:06:04.746 "vfu_virtio_scsi_remove_target", 00:06:04.746 
"vfu_virtio_scsi_add_target", 00:06:04.746 "vfu_virtio_create_blk_endpoint", 00:06:04.746 "vfu_virtio_delete_endpoint", 00:06:04.746 "iaa_scan_accel_module", 00:06:04.746 "dsa_scan_accel_module", 00:06:04.746 "ioat_scan_accel_module", 00:06:04.746 "accel_error_inject_error", 00:06:04.746 "bdev_iscsi_delete", 00:06:04.746 "bdev_iscsi_create", 00:06:04.746 "bdev_iscsi_set_options", 00:06:04.746 "bdev_virtio_attach_controller", 00:06:04.746 "bdev_virtio_scsi_get_devices", 00:06:04.746 "bdev_virtio_detach_controller", 00:06:04.746 "bdev_virtio_blk_set_hotplug", 00:06:04.746 "bdev_ftl_set_property", 00:06:04.746 "bdev_ftl_get_properties", 00:06:04.746 "bdev_ftl_get_stats", 00:06:04.746 "bdev_ftl_unmap", 00:06:04.746 "bdev_ftl_unload", 00:06:04.746 "bdev_ftl_delete", 00:06:04.746 "bdev_ftl_load", 00:06:04.746 "bdev_ftl_create", 00:06:04.746 "bdev_aio_delete", 00:06:04.746 "bdev_aio_rescan", 00:06:04.746 "bdev_aio_create", 00:06:04.746 "blobfs_create", 00:06:04.746 "blobfs_detect", 00:06:04.746 "blobfs_set_cache_size", 00:06:04.746 "bdev_zone_block_delete", 00:06:04.746 "bdev_zone_block_create", 00:06:04.746 "bdev_delay_delete", 00:06:04.746 "bdev_delay_create", 00:06:04.746 "bdev_delay_update_latency", 00:06:04.746 "bdev_split_delete", 00:06:04.746 "bdev_split_create", 00:06:04.746 "bdev_error_inject_error", 00:06:04.746 "bdev_error_delete", 00:06:04.746 "bdev_error_create", 00:06:04.746 "bdev_raid_set_options", 00:06:04.746 "bdev_raid_remove_base_bdev", 00:06:04.746 "bdev_raid_add_base_bdev", 00:06:04.746 "bdev_raid_delete", 00:06:04.746 "bdev_raid_create", 00:06:04.746 "bdev_raid_get_bdevs", 00:06:04.746 "bdev_lvol_set_parent_bdev", 00:06:04.746 "bdev_lvol_set_parent", 00:06:04.746 "bdev_lvol_check_shallow_copy", 00:06:04.746 "bdev_lvol_start_shallow_copy", 00:06:04.746 "bdev_lvol_grow_lvstore", 00:06:04.746 "bdev_lvol_get_lvols", 00:06:04.746 "bdev_lvol_get_lvstores", 00:06:04.746 "bdev_lvol_delete", 00:06:04.746 "bdev_lvol_set_read_only", 00:06:04.746 "bdev_lvol_resize", 00:06:04.746 "bdev_lvol_decouple_parent", 00:06:04.746 "bdev_lvol_inflate", 00:06:04.746 "bdev_lvol_rename", 00:06:04.746 "bdev_lvol_clone_bdev", 00:06:04.746 "bdev_lvol_clone", 00:06:04.746 "bdev_lvol_snapshot", 00:06:04.746 "bdev_lvol_create", 00:06:04.746 "bdev_lvol_delete_lvstore", 00:06:04.746 "bdev_lvol_rename_lvstore", 00:06:04.746 "bdev_lvol_create_lvstore", 00:06:04.746 "bdev_passthru_delete", 00:06:04.746 "bdev_passthru_create", 00:06:04.746 "bdev_nvme_cuse_unregister", 00:06:04.746 "bdev_nvme_cuse_register", 00:06:04.746 "bdev_opal_new_user", 00:06:04.746 "bdev_opal_set_lock_state", 00:06:04.746 "bdev_opal_delete", 00:06:04.746 "bdev_opal_get_info", 00:06:04.746 "bdev_opal_create", 00:06:04.746 "bdev_nvme_opal_revert", 00:06:04.746 "bdev_nvme_opal_init", 00:06:04.746 "bdev_nvme_send_cmd", 00:06:04.746 "bdev_nvme_get_path_iostat", 00:06:04.746 "bdev_nvme_get_mdns_discovery_info", 00:06:04.746 "bdev_nvme_stop_mdns_discovery", 00:06:04.746 "bdev_nvme_start_mdns_discovery", 00:06:04.746 "bdev_nvme_set_multipath_policy", 00:06:04.746 "bdev_nvme_set_preferred_path", 00:06:04.746 "bdev_nvme_get_io_paths", 00:06:04.746 "bdev_nvme_remove_error_injection", 00:06:04.746 "bdev_nvme_add_error_injection", 00:06:04.746 "bdev_nvme_get_discovery_info", 00:06:04.746 "bdev_nvme_stop_discovery", 00:06:04.746 "bdev_nvme_start_discovery", 00:06:04.746 "bdev_nvme_get_controller_health_info", 00:06:04.746 "bdev_nvme_disable_controller", 00:06:04.746 "bdev_nvme_enable_controller", 00:06:04.746 "bdev_nvme_reset_controller", 00:06:04.746 
"bdev_nvme_get_transport_statistics", 00:06:04.746 "bdev_nvme_apply_firmware", 00:06:04.746 "bdev_nvme_detach_controller", 00:06:04.746 "bdev_nvme_get_controllers", 00:06:04.746 "bdev_nvme_attach_controller", 00:06:04.746 "bdev_nvme_set_hotplug", 00:06:04.746 "bdev_nvme_set_options", 00:06:04.746 "bdev_null_resize", 00:06:04.746 "bdev_null_delete", 00:06:04.746 "bdev_null_create", 00:06:04.746 "bdev_malloc_delete", 00:06:04.746 "bdev_malloc_create" 00:06:04.746 ] 00:06:04.746 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.746 22:56:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4130657 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 4130657 ']' 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 4130657 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:04.746 22:56:56 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4130657 00:06:04.746 22:56:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:04.746 22:56:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:04.746 22:56:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4130657' 00:06:04.746 killing process with pid 4130657 00:06:04.746 22:56:57 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 4130657 00:06:04.746 22:56:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 4130657 00:06:05.314 00:06:05.314 real 0m1.713s 00:06:05.314 user 0m3.155s 00:06:05.314 sys 0m0.578s 00:06:05.314 22:56:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.314 22:56:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.314 ************************************ 00:06:05.314 END TEST spdkcli_tcp 00:06:05.314 ************************************ 00:06:05.314 22:56:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.314 22:56:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:05.314 22:56:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.314 22:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.314 ************************************ 00:06:05.314 START TEST dpdk_mem_utility 00:06:05.314 ************************************ 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.314 * Looking for test storage... 
00:06:05.314 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:05.314 22:56:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.314 22:56:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4130998 00:06:05.314 22:56:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4130998 00:06:05.314 22:56:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 4130998 ']' 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.314 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:05.315 22:56:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.315 [2024-06-07 22:56:57.555555] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:05.315 [2024-06-07 22:56:57.555624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130998 ] 00:06:05.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.574 [2024-06-07 22:56:57.671653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.574 [2024-06-07 22:56:57.758528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.513 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:06.513 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:06:06.513 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:06.513 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:06.513 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:06.513 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.513 { 00:06:06.513 "filename": "/tmp/spdk_mem_dump.txt" 00:06:06.513 } 00:06:06.513 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:06.513 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.513 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:06.513 1 heaps totaling size 814.000000 MiB 00:06:06.513 size: 814.000000 MiB heap id: 0 00:06:06.513 end heaps---------- 00:06:06.513 8 mempools totaling size 598.116089 MiB 00:06:06.513 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:06.513 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:06.513 size: 84.521057 MiB name: bdev_io_4130998 00:06:06.513 size: 51.011292 MiB name: evtpool_4130998 00:06:06.513 size: 50.003479 MiB 
name: msgpool_4130998 00:06:06.513 size: 21.763794 MiB name: PDU_Pool 00:06:06.513 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:06.513 size: 0.026123 MiB name: Session_Pool 00:06:06.513 end mempools------- 00:06:06.513 6 memzones totaling size 4.142822 MiB 00:06:06.513 size: 1.000366 MiB name: RG_ring_0_4130998 00:06:06.513 size: 1.000366 MiB name: RG_ring_1_4130998 00:06:06.513 size: 1.000366 MiB name: RG_ring_4_4130998 00:06:06.513 size: 1.000366 MiB name: RG_ring_5_4130998 00:06:06.513 size: 0.125366 MiB name: RG_ring_2_4130998 00:06:06.513 size: 0.015991 MiB name: RG_ring_3_4130998 00:06:06.513 end memzones------- 00:06:06.513 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:06.513 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:06.513 list of free elements. size: 12.519348 MiB 00:06:06.513 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:06.513 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:06.513 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:06.513 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:06.513 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:06.513 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:06.513 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:06.513 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:06.513 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:06.513 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:06.513 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:06.513 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:06.513 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:06.513 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:06.513 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:06.513 list of standard malloc elements. 
size: 199.218079 MiB 00:06:06.513 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:06.513 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:06.513 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:06.513 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:06.513 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:06.513 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:06.513 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:06.513 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:06.513 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:06.513 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:06.513 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:06.513 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:06.513 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:06.514 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:06.514 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:06.514 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:06.514 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:06.514 list of memzone associated elements. 
size: 602.262573 MiB 00:06:06.514 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:06.514 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:06.514 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:06.514 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:06.514 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:06.514 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4130998_0 00:06:06.514 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:06.514 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4130998_0 00:06:06.514 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:06.514 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4130998_0 00:06:06.514 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:06.514 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:06.514 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:06.514 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:06.514 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:06.514 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4130998 00:06:06.514 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:06.514 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4130998 00:06:06.514 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:06.514 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4130998 00:06:06.514 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:06.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:06.514 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:06.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:06.514 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:06.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:06.514 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:06.514 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:06.514 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:06.514 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4130998 00:06:06.514 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:06.514 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4130998 00:06:06.514 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:06.514 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4130998 00:06:06.514 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:06.514 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4130998 00:06:06.514 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:06.514 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4130998 00:06:06.514 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:06.514 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:06.514 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:06.514 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:06.514 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:06.514 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:06.514 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:06.514 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4130998 00:06:06.514 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:06.514 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:06.514 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:06.514 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:06.514 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:06.514 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4130998 00:06:06.514 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:06.514 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:06.514 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:06.514 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4130998 00:06:06.514 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:06.514 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4130998 00:06:06.514 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:06.514 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:06.514 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:06.514 22:56:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4130998 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 4130998 ']' 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 4130998 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4130998 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4130998' 00:06:06.514 killing process with pid 4130998 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 4130998 00:06:06.514 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 4130998 00:06:06.774 00:06:06.774 real 0m1.518s 00:06:06.774 user 0m1.569s 00:06:06.774 sys 0m0.502s 00:06:06.774 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.774 22:56:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.774 ************************************ 00:06:06.774 END TEST dpdk_mem_utility 00:06:06.774 ************************************ 00:06:06.774 22:56:58 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:06.774 22:56:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:06.774 22:56:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.774 22:56:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.774 ************************************ 00:06:06.774 START TEST event 00:06:06.774 ************************************ 00:06:06.774 22:56:59 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:07.034 * Looking for test storage... 
00:06:07.034 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:07.034 22:56:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:07.034 22:56:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.034 22:56:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.034 22:56:59 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:07.034 22:56:59 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.034 22:56:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.034 ************************************ 00:06:07.034 START TEST event_perf 00:06:07.034 ************************************ 00:06:07.034 22:56:59 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.034 Running I/O for 1 seconds...[2024-06-07 22:56:59.169400] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:07.034 [2024-06-07 22:56:59.169483] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131322 ] 00:06:07.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.034 [2024-06-07 22:56:59.286815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.293 [2024-06-07 22:56:59.377835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.293 [2024-06-07 22:56:59.377929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.293 [2024-06-07 22:56:59.378044] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.293 [2024-06-07 22:56:59.378045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.232 Running I/O for 1 seconds... 00:06:08.232 lcore 0: 180666 00:06:08.232 lcore 1: 180665 00:06:08.232 lcore 2: 180667 00:06:08.232 lcore 3: 180667 00:06:08.232 done. 00:06:08.232 00:06:08.232 real 0m1.307s 00:06:08.232 user 0m4.171s 00:06:08.232 sys 0m0.126s 00:06:08.232 22:57:00 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.232 22:57:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.232 ************************************ 00:06:08.232 END TEST event_perf 00:06:08.232 ************************************ 00:06:08.232 22:57:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.232 22:57:00 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:08.232 22:57:00 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.232 22:57:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.491 ************************************ 00:06:08.491 START TEST event_reactor 00:06:08.491 ************************************ 00:06:08.491 22:57:00 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.491 [2024-06-07 22:57:00.552662] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
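In the event_perf output above, each "lcore N:" line is the number of events that reactor processed during the run; with -t 1 and mask 0xF the four reactors land around 180k each. Reproducing it by hand, flags as logged:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # -m 0xF: reactors on cores 0-3, -t 1: measure for 1 second
    $SPDK/test/event/event_perf/event_perf -m 0xF -t 1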
00:06:08.491 [2024-06-07 22:57:00.552742] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131650 ] 00:06:08.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.491 [2024-06-07 22:57:00.669415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.491 [2024-06-07 22:57:00.756596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.870 test_start 00:06:09.870 oneshot 00:06:09.870 tick 100 00:06:09.870 tick 100 00:06:09.870 tick 250 00:06:09.870 tick 100 00:06:09.870 tick 100 00:06:09.870 tick 100 00:06:09.870 tick 250 00:06:09.870 tick 500 00:06:09.870 tick 100 00:06:09.870 tick 100 00:06:09.870 tick 250 00:06:09.870 tick 100 00:06:09.870 tick 100 00:06:09.870 test_end 00:06:09.870 00:06:09.870 real 0m1.297s 00:06:09.870 user 0m1.165s 00:06:09.870 sys 0m0.125s 00:06:09.870 22:57:01 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.870 22:57:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.870 ************************************ 00:06:09.870 END TEST event_reactor 00:06:09.870 ************************************ 00:06:09.870 22:57:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.870 22:57:01 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:09.870 22:57:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.870 22:57:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.870 ************************************ 00:06:09.870 START TEST event_reactor_perf 00:06:09.870 ************************************ 00:06:09.870 22:57:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.870 [2024-06-07 22:57:01.921170] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
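The test_start/tick/test_end trace above comes from the single-core reactor test; the number after each tick evidently matches the timed-poller periods the test registers (100, 250, 500). reactor_perf, starting below, measures raw event throughput on the same single reactor. Both by hand, flags as logged:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/test/event/reactor/reactor -t 1             # emits the tick trace seen above
    $SPDK/test/event/reactor_perf/reactor_perf -t 1   # prints "Performance: N events per second"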
00:06:09.870 [2024-06-07 22:57:01.921251] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132022 ] 00:06:09.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.870 [2024-06-07 22:57:02.034672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.870 [2024-06-07 22:57:02.119276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.247 test_start 00:06:11.247 test_end 00:06:11.247 Performance: 665910 events per second 00:06:11.247 00:06:11.247 real 0m1.290s 00:06:11.247 user 0m1.166s 00:06:11.247 sys 0m0.118s 00:06:11.247 22:57:03 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.247 22:57:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 ************************************ 00:06:11.247 END TEST event_reactor_perf 00:06:11.247 ************************************ 00:06:11.247 22:57:03 event -- event/event.sh@49 -- # uname -s 00:06:11.247 22:57:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.247 22:57:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.247 22:57:03 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.247 22:57:03 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.247 22:57:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 ************************************ 00:06:11.247 START TEST event_scheduler 00:06:11.247 ************************************ 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.247 * Looking for test storage... 00:06:11.247 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:11.247 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.247 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4132330 00:06:11.247 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.247 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.247 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4132330 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 4132330 ']' 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
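The scheduler app launched above is started with --wait-for-rpc, so the test drives it entirely over RPC: switch to the dynamic scheduler, finish framework init, then create pinned threads through the scheduler plugin. A sketch of that sequence, RPC names as logged; the test's rpc_cmd wrapper additionally puts the plugin on the Python path, which is elided here:

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK/scripts/rpc.py framework_start_init
    # per-core active/idle threads, as in scheduler_create_thread below:
    $SPDK/scripts/rpc.py --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100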
00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:11.247 22:57:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 [2024-06-07 22:57:03.418536] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:11.247 [2024-06-07 22:57:03.418610] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132330 ] 00:06:11.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.247 [2024-06-07 22:57:03.510620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.506 [2024-06-07 22:57:03.591469] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.506 [2024-06-07 22:57:03.591555] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.506 [2024-06-07 22:57:03.591666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.506 [2024-06-07 22:57:03.591666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:06:11.506 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.506 POWER: Env isn't set yet! 00:06:11.506 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:11.506 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.506 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.506 POWER: Attempting to initialise PSTAT power management... 
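The POWER lines around here are DPDK's P-state handling: for each lcore it records the current cpufreq governor, switches it to 'performance' for the run, and, as the teardown later in the trace shows, restores 'powersave' afterwards. The equivalent sysfs operations, using the path from the failed-write notice above and cpu0 as an example (run as root):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # e.g. powersave
    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    # after the test, the original governor is written back the same way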
00:06:11.506 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:11.506 POWER: Initialized successfully for lcore 0 power management 00:06:11.506 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:11.506 POWER: Initialized successfully for lcore 1 power management 00:06:11.506 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:11.506 POWER: Initialized successfully for lcore 2 power management 00:06:11.506 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:11.506 POWER: Initialized successfully for lcore 3 power management 00:06:11.506 [2024-06-07 22:57:03.672748] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.506 [2024-06-07 22:57:03.672764] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.506 [2024-06-07 22:57:03.672774] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.506 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.506 [2024-06-07 22:57:03.744429] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.506 22:57:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.506 22:57:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 ************************************ 00:06:11.765 START TEST scheduler_create_thread 00:06:11.765 ************************************ 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 2 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 3 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 4 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 5 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 6 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 7 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 8 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 9 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 10 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:11.765 22:57:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.705 22:57:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.705 22:57:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.705 22:57:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.705 22:57:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.082 22:57:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.082 22:57:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:14.082 22:57:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:14.082 22:57:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.082 22:57:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.019 22:57:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:15.019 00:06:15.019 real 0m3.380s 00:06:15.019 user 0m0.028s 00:06:15.019 sys 0m0.003s 00:06:15.019 22:57:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.019 22:57:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.019 ************************************ 00:06:15.019 END TEST scheduler_create_thread 00:06:15.019 ************************************ 00:06:15.019 22:57:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:15.019 22:57:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4132330 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 4132330 ']' 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 4132330 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
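killprocess, traced across the lines above and below for the scheduler pid, guards the kill: confirm the pid still answers kill -0, resolve the command name via ps (a reactor thread here reports as reactor_N), refuse to proceed if it resolves to sudo, then kill and wait. The pattern in isolation, with this run's pid as the example; wait only succeeds because the test started the pid as a child of the same shell:

    pid=4132330                                  # scheduler pid from this run
    kill -0 $pid                                 # still alive?
    name=$(ps --no-headers -o comm= $pid)        # e.g. reactor_2
    [ "$name" = sudo ] || { kill $pid; wait $pid; }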
00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4132330 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4132330' 00:06:15.019 killing process with pid 4132330 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 4132330 00:06:15.019 22:57:07 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 4132330 00:06:15.277 [2024-06-07 22:57:07.548665] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:15.537 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:15.537 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:15.537 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:15.537 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:15.537 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:15.537 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:15.537 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:15.537 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:15.537 00:06:15.537 real 0m4.488s 00:06:15.537 user 0m7.855s 00:06:15.537 sys 0m0.440s 00:06:15.537 22:57:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.537 22:57:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.537 ************************************ 00:06:15.537 END TEST event_scheduler 00:06:15.537 ************************************ 00:06:15.796 22:57:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.796 22:57:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.796 22:57:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.796 22:57:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.796 22:57:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.796 ************************************ 00:06:15.796 START TEST app_repeat 00:06:15.796 ************************************ 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4133611 00:06:15.796 22:57:07 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4133611' 00:06:15.796 Process app_repeat pid: 4133611 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.796 spdk_app_start Round 0 00:06:15.796 22:57:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4133611 /var/tmp/spdk-nbd.sock 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 4133611 ']' 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.796 22:57:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.796 [2024-06-07 22:57:07.894746] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:15.796 [2024-06-07 22:57:07.894830] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133611 ] 00:06:15.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.796 [2024-06-07 22:57:08.012993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.055 [2024-06-07 22:57:08.098111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.055 [2024-06-07 22:57:08.098116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.622 22:57:08 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.622 22:57:08 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:16.622 22:57:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.880 Malloc0 00:06:16.880 22:57:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.139 Malloc1 00:06:17.139 22:57:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.139 22:57:09 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.139 22:57:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.398 /dev/nbd0 00:06:17.398 22:57:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.398 22:57:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.398 1+0 records in 00:06:17.398 1+0 records out 00:06:17.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257783 s, 15.9 MB/s 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:17.398 22:57:09 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:17.398 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.398 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.398 22:57:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.657 /dev/nbd1 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:17.657 22:57:09 
event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.657 1+0 records in 00:06:17.657 1+0 records out 00:06:17.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242913 s, 16.9 MB/s 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:17.657 22:57:09 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.657 { 00:06:17.657 "nbd_device": "/dev/nbd0", 00:06:17.657 "bdev_name": "Malloc0" 00:06:17.657 }, 00:06:17.657 { 00:06:17.657 "nbd_device": "/dev/nbd1", 00:06:17.657 "bdev_name": "Malloc1" 00:06:17.657 } 00:06:17.657 ]' 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.657 { 00:06:17.657 "nbd_device": "/dev/nbd0", 00:06:17.657 "bdev_name": "Malloc0" 00:06:17.657 }, 00:06:17.657 { 00:06:17.657 "nbd_device": "/dev/nbd1", 00:06:17.657 "bdev_name": "Malloc1" 00:06:17.657 } 00:06:17.657 ]' 00:06:17.657 22:57:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.916 /dev/nbd1' 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.916 /dev/nbd1' 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.916 
22:57:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.916 256+0 records in 00:06:17.916 256+0 records out 00:06:17.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010593 s, 99.0 MB/s 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.916 256+0 records in 00:06:17.916 256+0 records out 00:06:17.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284207 s, 36.9 MB/s 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.916 22:57:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.916 256+0 records in 00:06:17.916 256+0 records out 00:06:17.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269079 s, 39.0 MB/s 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.916 22:57:10 
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.916 22:57:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.175 22:57:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.433 22:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.692 22:57:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.692 22:57:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
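Each round of app_repeat traced above runs one nbd_rpc_data_verify cycle: export two malloc bdevs over NBD, push 1 MiB of random data through each device with dd, read it back with cmp, then tear the devices down. A runnable sketch of that cycle using the RPC subcommands, paths, and sizes visible in the xtrace; only the loop packaging is mine:

RPC='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
tmp=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest

$RPC bdev_malloc_create 64 4096                            # 64 MB bdev, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096                            # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of reference data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write phase
    cmp -b -n 1M "$tmp" "$nbd"                             # verify phase: fails on any differing byte
done
rm "$tmp"

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1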
00:06:18.957 22:57:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.957 [2024-06-07 22:57:11.206527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.216 [2024-06-07 22:57:11.287232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.216 [2024-06-07 22:57:11.287237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.216 [2024-06-07 22:57:11.330154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.217 [2024-06-07 22:57:11.330216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.753 22:57:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.753 22:57:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.753 spdk_app_start Round 1 00:06:21.753 22:57:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4133611 /var/tmp/spdk-nbd.sock 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 4133611 ']' 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.753 22:57:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.012 22:57:14 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.012 22:57:14 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:22.012 22:57:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.272 Malloc0 00:06:22.272 22:57:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.531 Malloc1 00:06:22.531 22:57:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.531 
22:57:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.531 22:57:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.790 /dev/nbd0 00:06:22.790 22:57:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.790 22:57:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.790 1+0 records in 00:06:22.790 1+0 records out 00:06:22.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236987 s, 17.3 MB/s 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:22.790 22:57:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:22.790 22:57:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.790 22:57:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.790 22:57:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.049 /dev/nbd1 00:06:23.049 22:57:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.049 22:57:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:23.049 
22:57:15 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.049 1+0 records in 00:06:23.049 1+0 records out 00:06:23.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276744 s, 14.8 MB/s 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:23.049 22:57:15 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.050 22:57:15 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:23.050 22:57:15 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:23.050 22:57:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.050 22:57:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.050 22:57:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.050 22:57:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.050 22:57:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd0", 00:06:23.308 "bdev_name": "Malloc0" 00:06:23.308 }, 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd1", 00:06:23.308 "bdev_name": "Malloc1" 00:06:23.308 } 00:06:23.308 ]' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd0", 00:06:23.308 "bdev_name": "Malloc0" 00:06:23.308 }, 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd1", 00:06:23.308 "bdev_name": "Malloc1" 00:06:23.308 } 00:06:23.308 ]' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.308 /dev/nbd1' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.308 /dev/nbd1' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.308 22:57:15 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.308 256+0 records in 00:06:23.308 256+0 records out 00:06:23.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112267 s, 93.4 MB/s 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.308 256+0 records in 00:06:23.308 256+0 records out 00:06:23.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218464 s, 48.0 MB/s 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.308 22:57:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.567 256+0 records in 00:06:23.567 256+0 records out 00:06:23.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244105 s, 43.0 MB/s 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.567 22:57:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.826 22:57:15 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.826 22:57:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.085 22:57:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.344 22:57:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.344 22:57:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.605 22:57:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.605 [2024-06-07 22:57:16.875543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.905 [2024-06-07 22:57:16.961319] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.905 [2024-06-07 22:57:16.961323] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.905 [2024-06-07 22:57:17.005529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
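The Round banners and the repeated SIGTERM shutdowns come from a small driver loop in event.sh: three passes, each exercising the nbd path and then ending with a spdk_kill_instance RPC and a three-second pause before the app comes back up. Roughly, with the loop bounds, RPC, and sleep taken from the trace and the verify helper name a placeholder:

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    nbd_rpc_data_verify_cycle   # placeholder for the dd/cmp sequence sketched after Round 0
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3                     # event.sh@35: let the app restart before the next round
done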
00:06:24.905 [2024-06-07 22:57:17.005588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.469 22:57:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.469 22:57:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.469 spdk_app_start Round 2 00:06:27.469 22:57:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4133611 /var/tmp/spdk-nbd.sock 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 4133611 ']' 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:27.469 22:57:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.728 22:57:19 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:27.728 22:57:19 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:27.728 22:57:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.987 Malloc0 00:06:27.987 22:57:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.246 Malloc1 00:06:28.246 22:57:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.246 22:57:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.505 /dev/nbd0 00:06:28.505 
22:57:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.505 22:57:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.505 1+0 records in 00:06:28.505 1+0 records out 00:06:28.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256776 s, 16.0 MB/s 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:28.505 22:57:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:28.505 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.506 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.506 22:57:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.765 /dev/nbd1 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.765 1+0 records in 00:06:28.765 1+0 records out 00:06:28.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206009 s, 19.9 MB/s 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:28.765 22:57:20 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.765 22:57:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.025 { 00:06:29.025 "nbd_device": "/dev/nbd0", 00:06:29.025 "bdev_name": "Malloc0" 00:06:29.025 }, 00:06:29.025 { 00:06:29.025 "nbd_device": "/dev/nbd1", 00:06:29.025 "bdev_name": "Malloc1" 00:06:29.025 } 00:06:29.025 ]' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.025 { 00:06:29.025 "nbd_device": "/dev/nbd0", 00:06:29.025 "bdev_name": "Malloc0" 00:06:29.025 }, 00:06:29.025 { 00:06:29.025 "nbd_device": "/dev/nbd1", 00:06:29.025 "bdev_name": "Malloc1" 00:06:29.025 } 00:06:29.025 ]' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.025 /dev/nbd1' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.025 /dev/nbd1' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.025 256+0 records in 00:06:29.025 256+0 records out 00:06:29.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104065 s, 101 MB/s 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.025 256+0 records in 00:06:29.025 256+0 records out 00:06:29.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282717 s, 37.1 MB/s 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.025 256+0 records in 00:06:29.025 256+0 records out 00:06:29.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231969 s, 45.2 MB/s 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.025 22:57:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.284 22:57:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.543 22:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.802 22:57:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.802 22:57:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.062 22:57:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.321 [2024-06-07 22:57:22.526384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.580 [2024-06-07 22:57:22.606818] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.580 [2024-06-07 22:57:22.606821] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.580 [2024-06-07 22:57:22.649876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.580 [2024-06-07 22:57:22.649928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
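The long (( i = 1 )) / (( i <= 20 )) runs in the trace are bounded polls over /proc/partitions: waitfornbd loops until the kernel lists the nbd device, waitfornbd_exit until it no longer does. A sketch of the exit variant, assuming the 20-try budget shown in the guards; the xtrace records no delay between probes, so the sleep here is an assumption:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # done as soon as the device is gone from the partition table
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed back-off between probes
    done
}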
00:06:33.117 22:57:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4133611 /var/tmp/spdk-nbd.sock 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 4133611 ']' 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:33.117 22:57:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:33.377 22:57:25 event.app_repeat -- event/event.sh@39 -- # killprocess 4133611 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 4133611 ']' 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 4133611 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4133611 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4133611' 00:06:33.377 killing process with pid 4133611 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@968 -- # kill 4133611 00:06:33.377 22:57:25 event.app_repeat -- common/autotest_common.sh@973 -- # wait 4133611 00:06:33.636 spdk_app_start is called in Round 0. 00:06:33.636 Shutdown signal received, stop current app iteration 00:06:33.636 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:33.636 spdk_app_start is called in Round 1. 00:06:33.636 Shutdown signal received, stop current app iteration 00:06:33.636 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:33.636 spdk_app_start is called in Round 2. 00:06:33.636 Shutdown signal received, stop current app iteration 00:06:33.636 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:33.636 spdk_app_start is called in Round 3. 
00:06:33.636 Shutdown signal received, stop current app iteration 00:06:33.636 22:57:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.636 22:57:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.636 00:06:33.636 real 0m17.900s 00:06:33.636 user 0m38.481s 00:06:33.636 sys 0m3.720s 00:06:33.636 22:57:25 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.636 22:57:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.636 ************************************ 00:06:33.636 END TEST app_repeat 00:06:33.636 ************************************ 00:06:33.636 22:57:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.636 22:57:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.636 22:57:25 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:33.636 22:57:25 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.636 22:57:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.636 ************************************ 00:06:33.636 START TEST cpu_locks 00:06:33.636 ************************************ 00:06:33.636 22:57:25 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.895 * Looking for test storage... 00:06:33.895 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:33.895 22:57:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.895 22:57:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.895 22:57:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.895 22:57:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.895 22:57:25 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:33.895 22:57:25 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.895 22:57:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.895 ************************************ 00:06:33.895 START TEST default_locks 00:06:33.895 ************************************ 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4136808 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4136808 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 4136808 ']' 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.895 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:33.896 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
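default_locks begins the way every test in cpu_locks.sh does: launch a fresh spdk_tgt pinned to core 0 (-m 0x1) and block until its RPC socket answers. The trace only exposes waitforlisten's locals (rpc_addr, max_retries=100), so the loop below is an assumption about its shape, not a copy of the real helper in autotest_common.sh:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
      # assumed probe: any cheap RPC succeeds once the socket is being served
      ./spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
      sleep 0.5
    done
    return 1
  }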
00:06:33.896 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:33.896 22:57:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.896 [2024-06-07 22:57:26.005315] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:33.896 [2024-06-07 22:57:26.005383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136808 ] 00:06:33.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.896 [2024-06-07 22:57:26.121817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.155 [2024-06-07 22:57:26.212453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.723 22:57:26 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:34.723 22:57:26 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:34.723 22:57:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4136808 00:06:34.723 22:57:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4136808 00:06:34.723 22:57:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.292 lslocks: write error 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4136808 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 4136808 ']' 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 4136808 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4136808 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4136808' 00:06:35.292 killing process with pid 4136808 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 4136808 00:06:35.292 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 4136808 00:06:35.861 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4136808 00:06:35.861 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:35.861 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 4136808 00:06:35.861 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 4136808 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 4136808 ']' 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.862 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (4136808) - No such process 00:06:35.862 ERROR: process (pid: 4136808) is no longer running 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.862 00:06:35.862 real 0m1.870s 00:06:35.862 user 0m1.998s 00:06:35.862 sys 0m0.696s 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.862 22:57:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.862 ************************************ 00:06:35.862 END TEST default_locks 00:06:35.862 ************************************ 00:06:35.862 22:57:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.862 22:57:27 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:35.862 22:57:27 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.862 22:57:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.862 ************************************ 00:06:35.862 START TEST default_locks_via_rpc 00:06:35.862 ************************************ 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4137300 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4137300 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.862 22:57:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 4137300 ']' 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:35.862 22:57:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.862 [2024-06-07 22:57:27.951926] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:35.862 [2024-06-07 22:57:27.952004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137300 ] 00:06:35.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.862 [2024-06-07 22:57:28.067984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.122 [2024-06-07 22:57:28.162000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4137300 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4137300 00:06:36.690 22:57:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4137300 00:06:37.258 22:57:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 4137300 ']' 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 4137300 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4137300 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4137300' 00:06:37.258 killing process with pid 4137300 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 4137300 00:06:37.258 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 4137300 00:06:37.827 00:06:37.827 real 0m1.905s 00:06:37.827 user 0m2.011s 00:06:37.827 sys 0m0.726s 00:06:37.827 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.827 22:57:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.827 ************************************ 00:06:37.827 END TEST default_locks_via_rpc 00:06:37.827 ************************************ 00:06:37.827 22:57:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.827 22:57:29 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:37.827 22:57:29 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.827 22:57:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.827 ************************************ 00:06:37.827 START TEST non_locking_app_on_locked_coremask 00:06:37.827 ************************************ 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4137632 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4137632 /var/tmp/spdk.sock 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4137632 ']' 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
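Both tests above finish through the same killprocess teardown, whose xtrace repeats verbatim each time: guard the pid, confirm the process is alive, check its comm name, then kill and reap. A hedged reconstruction (the real helper in autotest_common.sh also handles sudo-wrapped targets, which this sketch only notes):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1          # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1         # still alive?
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      # the runs above always see "reactor_0" here; "sudo" takes another path
      [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                        # the target is our child, so reap it
  }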
00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:37.827 22:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.827 [2024-06-07 22:57:29.939240] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:37.827 [2024-06-07 22:57:29.939303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137632 ] 00:06:37.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.827 [2024-06-07 22:57:30.056922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.086 [2024-06-07 22:57:30.150553] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4137782 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4137782 /var/tmp/spdk2.sock 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4137782 ']' 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:38.655 22:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.655 [2024-06-07 22:57:30.896305] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:38.655 [2024-06-07 22:57:30.896390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137782 ] 00:06:38.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.914 [2024-06-07 22:57:31.053440] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
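The deactivation notice above is the direct effect of --disable-cpumask-locks: the second target skips claiming per-core lock files. Whether an instance holds its lock is what the recurring locks_exist gate tests, and the "lslocks: write error" lines alongside it are almost certainly grep -q closing the pipe after the first match, not a real failure. The gate itself is a one-liner:

  locks_exist() {
    # an spdk_tgt started with -m 0x1 should hold /var/tmp/spdk_cpu_lock_000
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

default_locks_via_rpc, finished just above, drives the same state at runtime instead of at startup, toggling framework_disable_cpumask_locks / framework_enable_cpumask_locks over the RPC socket and re-checking the lock files after each call.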
00:06:38.914 [2024-06-07 22:57:31.053474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.173 [2024-06-07 22:57:31.231155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.742 22:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:39.742 22:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:39.742 22:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4137632 00:06:39.742 22:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4137632 00:06:39.742 22:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.120 lslocks: write error 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4137632 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 4137632 ']' 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 4137632 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:41.120 22:57:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4137632 00:06:41.120 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:41.120 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:41.120 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4137632' 00:06:41.120 killing process with pid 4137632 00:06:41.120 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 4137632 00:06:41.120 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 4137632 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4137782 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 4137782 ']' 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 4137782 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4137782 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4137782' 00:06:41.689 
killing process with pid 4137782 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 4137782 00:06:41.689 22:57:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 4137782 00:06:41.948 00:06:41.948 real 0m4.152s 00:06:41.948 user 0m4.516s 00:06:41.948 sys 0m1.392s 00:06:41.948 22:57:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.948 22:57:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.948 ************************************ 00:06:41.948 END TEST non_locking_app_on_locked_coremask 00:06:41.948 ************************************ 00:06:41.948 22:57:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.948 22:57:34 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:41.948 22:57:34 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.948 22:57:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.948 ************************************ 00:06:41.948 START TEST locking_app_on_unlocked_coremask 00:06:41.948 ************************************ 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4138435 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4138435 /var/tmp/spdk.sock 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4138435 ']' 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:41.948 22:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.949 [2024-06-07 22:57:34.174268] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:41.949 [2024-06-07 22:57:34.174328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138435 ] 00:06:42.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.208 [2024-06-07 22:57:34.290993] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
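The test that just ended is the canonical two-instance pattern for this file: the first target takes the core-0 lock, and a second target on the same mask can only come up because it both skips the lock and talks on its own RPC socket. Schematically, with the build path shortened to spdk_tgt:

  spdk_tgt -m 0x1 &                    # claims /var/tmp/spdk_cpu_lock_000
  tgt1=$!
  waitforlisten "$tgt1"
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  tgt2=$!                              # same core, but no lock is taken
  waitforlisten "$tgt2" /var/tmp/spdk2.sock

locking_app_on_unlocked_coremask, now starting, inverts the roles: the first instance runs with locks disabled and the plain second instance is the one that claims the core.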
00:06:42.208 [2024-06-07 22:57:34.291024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.208 [2024-06-07 22:57:34.375603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4138482 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4138482 /var/tmp/spdk2.sock 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4138482 ']' 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:43.145 22:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.145 [2024-06-07 22:57:35.129959] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:43.145 [2024-06-07 22:57:35.130048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138482 ] 00:06:43.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.145 [2024-06-07 22:57:35.284206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.404 [2024-06-07 22:57:35.459583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.971 22:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:43.971 22:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:43.971 22:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4138482 00:06:43.971 22:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4138482 00:06:43.971 22:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.345 lslocks: write error 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4138435 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 4138435 ']' 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 4138435 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4138435 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4138435' 00:06:45.345 killing process with pid 4138435 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 4138435 00:06:45.345 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 4138435 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4138482 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 4138482 ']' 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 4138482 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:45.912 22:57:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4138482 00:06:45.912 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:06:45.912 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:45.912 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4138482' 00:06:45.912 killing process with pid 4138482 00:06:45.912 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 4138482 00:06:45.912 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 4138482 00:06:46.170 00:06:46.170 real 0m4.184s 00:06:46.170 user 0m4.543s 00:06:46.170 sys 0m1.433s 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 END TEST locking_app_on_unlocked_coremask 00:06:46.170 ************************************ 00:06:46.170 22:57:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.170 22:57:38 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.170 22:57:38 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.170 22:57:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 START TEST locking_app_on_locked_coremask 00:06:46.170 ************************************ 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4139087 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4139087 /var/tmp/spdk.sock 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4139087 ']' 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:46.170 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.171 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:46.171 22:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.171 [2024-06-07 22:57:38.439099] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:46.171 [2024-06-07 22:57:38.439157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139087 ] 00:06:46.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.429 [2024-06-07 22:57:38.552167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.429 [2024-06-07 22:57:38.643916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4139318 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4139318 /var/tmp/spdk2.sock 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 4139318 /var/tmp/spdk2.sock 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 4139318 /var/tmp/spdk2.sock 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 4139318 ']' 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:47.365 22:57:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 [2024-06-07 22:57:39.398092] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:47.365 [2024-06-07 22:57:39.398178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139318 ] 00:06:47.365 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.365 [2024-06-07 22:57:39.552720] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4139087 has claimed it. 00:06:47.365 [2024-06-07 22:57:39.552767] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.933 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (4139318) - No such process 00:06:47.933 ERROR: process (pid: 4139318) is no longer running 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4139087 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4139087 00:06:47.933 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.501 lslocks: write error 00:06:48.501 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4139087 00:06:48.501 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 4139087 ']' 00:06:48.501 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 4139087 00:06:48.501 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:48.501 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:48.502 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4139087 00:06:48.761 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:48.761 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:48.761 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4139087' 00:06:48.761 killing process with pid 4139087 00:06:48.761 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 4139087 00:06:48.761 22:57:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 4139087 00:06:49.020 00:06:49.020 real 0m2.704s 00:06:49.020 user 0m3.010s 00:06:49.020 sys 0m0.908s 00:06:49.020 22:57:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.020 22:57:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.020 ************************************ 00:06:49.020 END TEST locking_app_on_locked_coremask 00:06:49.020 ************************************ 00:06:49.020 22:57:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.020 22:57:41 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:49.020 22:57:41 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.020 22:57:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.020 ************************************ 00:06:49.020 START TEST locking_overlapped_coremask 00:06:49.020 ************************************ 00:06:49.020 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4139615 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4139615 /var/tmp/spdk.sock 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 4139615 ']' 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:49.021 22:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 [2024-06-07 22:57:41.221800] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:49.021 [2024-06-07 22:57:41.221870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139615 ] 00:06:49.021 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.280 [2024-06-07 22:57:41.337615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.280 [2024-06-07 22:57:41.431030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.280 [2024-06-07 22:57:41.431124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.280 [2024-06-07 22:57:41.431128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4139881 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4139881 /var/tmp/spdk2.sock 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 4139881 /var/tmp/spdk2.sock 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 4139881 /var/tmp/spdk2.sock 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 4139881 ']' 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:50.215 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.215 [2024-06-07 22:57:42.185080] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:50.215 [2024-06-07 22:57:42.185153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139881 ] 00:06:50.215 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.215 [2024-06-07 22:57:42.312685] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4139615 has claimed it. 00:06:50.215 [2024-06-07 22:57:42.312734] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.791 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (4139881) - No such process 00:06:50.791 ERROR: process (pid: 4139881) is no longer running 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4139615 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 4139615 ']' 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 4139615 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4139615 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4139615' 00:06:50.791 killing process with pid 4139615 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
4139615 00:06:50.791 22:57:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 4139615 00:06:51.166 00:06:51.166 real 0m2.091s 00:06:51.166 user 0m5.865s 00:06:51.166 sys 0m0.548s 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.166 ************************************ 00:06:51.166 END TEST locking_overlapped_coremask 00:06:51.166 ************************************ 00:06:51.166 22:57:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.166 22:57:43 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:51.166 22:57:43 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.166 22:57:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.166 ************************************ 00:06:51.166 START TEST locking_overlapped_coremask_via_rpc 00:06:51.166 ************************************ 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4140152 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4140152 /var/tmp/spdk.sock 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 4140152 ']' 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:51.166 22:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.166 [2024-06-07 22:57:43.395935] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:51.166 [2024-06-07 22:57:43.395994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140152 ] 00:06:51.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.425 [2024-06-07 22:57:43.511780] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
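check_remaining_locks, expanded in the trace just above, is a straight comparison of a glob against a brace expansion: after the failed second instance exits, exactly the lock files for mask 0x7 (cores 0 through 2) must still be present, nothing more and nothing less. As a sketch:

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what -m 0x7 implies
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }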
00:06:51.425 [2024-06-07 22:57:43.511814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.425 [2024-06-07 22:57:43.598920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.425 [2024-06-07 22:57:43.599014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.425 [2024-06-07 22:57:43.599017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.362 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:52.362 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4140193 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4140193 /var/tmp/spdk2.sock 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 4140193 ']' 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:52.363 22:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.363 [2024-06-07 22:57:44.293360] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:52.363 [2024-06-07 22:57:44.293434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140193 ] 00:06:52.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.363 [2024-06-07 22:57:44.422358] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.363 [2024-06-07 22:57:44.422388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.363 [2024-06-07 22:57:44.574005] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.363 [2024-06-07 22:57:44.574099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.363 [2024-06-07 22:57:44.574099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.299 [2024-06-07 22:57:45.266637] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4140152 has claimed it. 
00:06:53.299 request: 00:06:53.299 { 00:06:53.299 "method": "framework_enable_cpumask_locks", 00:06:53.299 "req_id": 1 00:06:53.299 } 00:06:53.299 Got JSON-RPC error response 00:06:53.299 response: 00:06:53.299 { 00:06:53.299 "code": -32603, 00:06:53.299 "message": "Failed to claim CPU core: 2" 00:06:53.299 } 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4140152 /var/tmp/spdk.sock 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 4140152 ']' 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4140193 /var/tmp/spdk2.sock 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 4140193 ']' 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
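This is the heart of the test: the first target's framework_enable_cpumask_locks call succeeds and creates the per-core lock files (/var/tmp/spdk_cpu_lock_000 through _002, verified a few lines below), so the second target's attempt must fail with the JSON-RPC error -32603 shown above, since core 2 is already claimed. The suite drives this through its rpc_cmd wrapper; an assumed equivalent using scripts/rpc.py would be:

  # Target 1 takes locks for cores 0-2:
  scripts/rpc.py framework_enable_cpumask_locks
  # Target 2 must be refused -- core 2 is already locked by pid 4140152:
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      && echo "unexpected success" >&2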
00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:53.299 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.559 00:06:53.559 real 0m2.394s 00:06:53.559 user 0m1.088s 00:06:53.559 sys 0m0.232s 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.559 22:57:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.559 ************************************ 00:06:53.559 END TEST locking_overlapped_coremask_via_rpc 00:06:53.559 ************************************ 00:06:53.559 22:57:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.559 22:57:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4140152 ]] 00:06:53.559 22:57:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4140152 00:06:53.559 22:57:45 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 4140152 ']' 00:06:53.559 22:57:45 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 4140152 00:06:53.559 22:57:45 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:53.559 22:57:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:53.559 22:57:45 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4140152 00:06:53.819 22:57:45 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:53.819 22:57:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:53.819 22:57:45 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4140152' 00:06:53.819 killing process with pid 4140152 00:06:53.819 22:57:45 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 4140152 00:06:53.819 22:57:45 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 4140152 00:06:54.078 22:57:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4140193 ]] 00:06:54.078 22:57:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4140193 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 4140193 ']' 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 4140193 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4140193 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4140193' 00:06:54.078 killing process with pid 4140193 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 4140193 00:06:54.078 22:57:46 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 4140193 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4140152 ]] 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4140152 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 4140152 ']' 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 4140152 00:06:54.337 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (4140152) - No such process 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 4140152 is not found' 00:06:54.337 Process with pid 4140152 is not found 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4140193 ]] 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4140193 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 4140193 ']' 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 4140193 00:06:54.337 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (4140193) - No such process 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 4140193 is not found' 00:06:54.337 Process with pid 4140193 is not found 00:06:54.337 22:57:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.337 00:06:54.337 real 0m20.748s 00:06:54.337 user 0m34.965s 00:06:54.337 sys 0m7.033s 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.337 22:57:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.337 ************************************ 00:06:54.337 END TEST cpu_locks 00:06:54.337 ************************************ 00:06:54.597 00:06:54.597 real 0m47.599s 00:06:54.597 user 1m28.006s 00:06:54.597 sys 0m11.976s 00:06:54.597 22:57:46 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.597 22:57:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.597 ************************************ 00:06:54.597 END TEST event 00:06:54.597 ************************************ 00:06:54.597 22:57:46 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:54.597 22:57:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.597 22:57:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.597 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.597 ************************************ 00:06:54.597 START TEST thread 00:06:54.597 ************************************ 00:06:54.597 22:57:46 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:54.597 * Looking for test storage... 00:06:54.597 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:54.597 22:57:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.597 22:57:46 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:54.597 22:57:46 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.597 22:57:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.597 ************************************ 00:06:54.597 START TEST thread_poller_perf 00:06:54.597 ************************************ 00:06:54.597 22:57:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.597 [2024-06-07 22:57:46.859552] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:54.597 [2024-06-07 22:57:46.859643] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140823 ] 00:06:54.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.856 [2024-06-07 22:57:46.967151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.856 [2024-06-07 22:57:47.053918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.856 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:56.235 ====================================== 00:06:56.235 busy:2507149190 (cyc) 00:06:56.235 total_run_count: 589000 00:06:56.235 tsc_hz: 2500000000 (cyc) 00:06:56.235 ====================================== 00:06:56.235 poller_cost: 4256 (cyc), 1702 (nsec) 00:06:56.235 00:06:56.235 real 0m1.291s 00:06:56.235 user 0m1.162s 00:06:56.235 sys 0m0.120s 00:06:56.235 22:57:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.235 22:57:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.235 ************************************ 00:06:56.235 END TEST thread_poller_perf 00:06:56.235 ************************************ 00:06:56.235 22:57:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.235 22:57:48 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:56.235 22:57:48 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.235 22:57:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.235 ************************************ 00:06:56.235 START TEST thread_poller_perf 00:06:56.235 ************************************ 00:06:56.235 22:57:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.235 [2024-06-07 22:57:48.223113] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
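The ====== block above is poller_perf output for 1000 pollers registered with a 1 microsecond period: poller_cost is busy cycles divided by the invocation count, converted to nanoseconds with the reported TSC rate. A quick check of the printed numbers (shell arithmetic, values copied from the output above):

  echo $(( 2507149190 / 589000 ))   # -> 4256 cycles per poller invocation
  # 4256 cyc / 2.5 GHz (tsc_hz above) = ~1702 ns,
  # matching "poller_cost: 4256 (cyc), 1702 (nsec)"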
00:06:56.235 [2024-06-07 22:57:48.223171] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141104 ] 00:06:56.235 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.235 [2024-06-07 22:57:48.337414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.235 [2024-06-07 22:57:48.422893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.235 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:57.614 ====================================== 00:06:57.614 busy:2501929818 (cyc) 00:06:57.614 total_run_count: 9331000 00:06:57.614 tsc_hz: 2500000000 (cyc) 00:06:57.614 ====================================== 00:06:57.614 poller_cost: 268 (cyc), 107 (nsec) 00:06:57.614 00:06:57.614 real 0m1.288s 00:06:57.614 user 0m1.161s 00:06:57.614 sys 0m0.122s 00:06:57.614 22:57:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.614 22:57:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.614 ************************************ 00:06:57.614 END TEST thread_poller_perf 00:06:57.614 ************************************ 00:06:57.614 22:57:49 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:57.614 22:57:49 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:57.614 22:57:49 thread -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:57.614 22:57:49 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.614 22:57:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.614 ************************************ 00:06:57.614 START TEST thread_spdk_lock 00:06:57.614 ************************************ 00:06:57.614 22:57:49 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:57.614 [2024-06-07 22:57:49.590298] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
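The second poller_perf run above uses -l 0: the same 1000 pollers, but registered with period 0 so they run on every reactor iteration instead of from the timer list. Per-invocation cost drops from 4256 to 268 cycles (about 107 ns); reading that gap as timed-poller bookkeeping overhead is an interpretation -- the log only gives the numbers. Same sanity check as before:

  echo $(( 2501929818 / 9331000 ))  # -> 268 cycles per invocation, matching poller_cost above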
00:06:57.614 [2024-06-07 22:57:49.590378] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141377 ] 00:06:57.614 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.614 [2024-06-07 22:57:49.705974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.614 [2024-06-07 22:57:49.792523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.614 [2024-06-07 22:57:49.792529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.183 [2024-06-07 22:57:50.309488] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:58.183 [2024-06-07 22:57:50.309538] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:58.183 [2024-06-07 22:57:50.309553] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14cb540 00:06:58.183 [2024-06-07 22:57:50.310596] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:58.183 [2024-06-07 22:57:50.310702] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:58.183 [2024-06-07 22:57:50.310727] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:58.183 Starting test contend 00:06:58.183 Worker Delay Wait us Hold us Total us 00:06:58.183 0 3 159122 198762 357885 00:06:58.183 1 5 85517 297993 383510 00:06:58.183 PASS test contend 00:06:58.183 Starting test hold_by_poller 00:06:58.183 PASS test hold_by_poller 00:06:58.183 Starting test hold_by_message 00:06:58.183 PASS test hold_by_message 00:06:58.183 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:58.183 100014 assertions passed 00:06:58.183 0 assertions failed 00:06:58.183 00:06:58.183 real 0m0.810s 00:06:58.183 user 0m1.191s 00:06:58.183 sys 0m0.133s 00:06:58.183 22:57:50 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.183 22:57:50 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 ************************************ 00:06:58.183 END TEST thread_spdk_lock 00:06:58.183 ************************************ 00:06:58.183 00:06:58.183 real 0m3.721s 00:06:58.183 user 0m3.627s 00:06:58.183 sys 0m0.623s 00:06:58.183 22:57:50 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.183 22:57:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 ************************************ 00:06:58.183 END TEST thread 00:06:58.183 ************************************ 00:06:58.442 22:57:50 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:58.442 22:57:50 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:58.442 22:57:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:58.442 22:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:58.442 ************************************ 00:06:58.442 START TEST accel 00:06:58.442 ************************************ 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:58.442 * Looking for test storage... 00:06:58.442 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:58.442 22:57:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:58.442 22:57:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:58.442 22:57:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.442 22:57:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4141466 00:06:58.442 22:57:50 accel -- accel/accel.sh@63 -- # waitforlisten 4141466 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@830 -- # '[' -z 4141466 ']' 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:58.442 22:57:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.442 22:57:50 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:58.442 22:57:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:58.442 22:57:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.442 22:57:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.442 22:57:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.442 22:57:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.442 22:57:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.442 22:57:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:58.442 22:57:50 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.442 [2024-06-07 22:57:50.637880] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
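For the accel suite, spdk_tgt is started with -c /dev/fd/63: build_accel_config assembles a JSON config in accel_json_cfg and hands it to the target through a file descriptor rather than a file on disk. A hedged stand-in for that mechanism (the empty object is a placeholder, not the suite's real config):

  cfg='{}'                                  # placeholder; accel.sh builds the real JSON here
  ./build/bin/spdk_tgt -c <(echo "$cfg") &  # process substitution yields a /dev/fd/NN path
  scripts/rpc.py accel_get_opc_assignments  # opcode->module map, dumped in the next lines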
00:06:58.442 [2024-06-07 22:57:50.637960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141466 ] 00:06:58.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.701 [2024-06-07 22:57:50.754281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.701 [2024-06-07 22:57:50.845217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.638 22:57:51 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:59.638 22:57:51 accel -- common/autotest_common.sh@863 -- # return 0 00:06:59.638 22:57:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:59.638 22:57:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:59.638 22:57:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:59.638 22:57:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:59.638 22:57:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:59.638 22:57:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:59.638 22:57:51 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:59.638 22:57:51 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.638 22:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.638 22:57:51 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.638 
22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.638 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.638 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.638 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:59.640 22:57:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:59.640 22:57:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:59.640 22:57:51 accel -- accel/accel.sh@75 -- # killprocess 4141466 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@949 -- # '[' -z 4141466 ']' 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@953 -- # kill -0 4141466 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@954 -- # uname 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4141466 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4141466' 00:06:59.640 killing process with pid 4141466 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@968 -- # kill 4141466 00:06:59.640 22:57:51 accel -- common/autotest_common.sh@973 -- # 
wait 4141466 00:06:59.900 22:57:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:59.900 22:57:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:59.900 22:57:51 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:59.900 22:57:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.900 22:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.900 22:57:52 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:59.900 22:57:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:59.901 22:57:52 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.901 22:57:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:59.901 22:57:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:59.901 22:57:52 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:59.901 22:57:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.901 22:57:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.901 ************************************ 00:06:59.901 START TEST accel_missing_filename 00:06:59.901 ************************************ 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:59.901 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.901 
22:57:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:59.901 22:57:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:59.901 [2024-06-07 22:57:52.161622] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:59.901 [2024-06-07 22:57:52.161716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141767 ] 00:07:00.195 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.195 [2024-06-07 22:57:52.281165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.195 [2024-06-07 22:57:52.370766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.195 [2024-06-07 22:57:52.413119] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.456 [2024-06-07 22:57:52.475339] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:00.456 A filename is required. 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:00.456 00:07:00.456 real 0m0.414s 00:07:00.456 user 0m0.268s 00:07:00.456 sys 0m0.183s 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.456 22:57:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:00.456 ************************************ 00:07:00.456 END TEST accel_missing_filename 00:07:00.456 ************************************ 00:07:00.456 22:57:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:00.456 22:57:52 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:00.456 22:57:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.456 22:57:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.456 ************************************ 00:07:00.456 START TEST accel_compress_verify 00:07:00.456 ************************************ 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.456 22:57:52 accel.accel_compress_verify -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.456 22:57:52 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:00.457 22:57:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:00.457 [2024-06-07 22:57:52.658112] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:00.457 [2024-06-07 22:57:52.658198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142005 ] 00:07:00.457 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.716 [2024-06-07 22:57:52.777083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.716 [2024-06-07 22:57:52.864858] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.716 [2024-06-07 22:57:52.907663] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.716 [2024-06-07 22:57:52.969987] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:00.977 00:07:00.977 Compression does not support the verify option, aborting. 
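accel_compress_verify is a negative test: accel_perf refuses a compress workload combined with -y (verify), printing the abort message above, and the NOT wrapper turns that expected failure into a pass. A toy version of the helper (the real one in autotest_common.sh also folds exit codes above 128, which is why es=161 becomes es=33 just below):

  NOT() { ! "$@"; }   # succeed only if the wrapped command fails
  NOT ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y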
00:07:00.977 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:00.978 00:07:00.978 real 0m0.413s 00:07:00.978 user 0m0.273s 00:07:00.978 sys 0m0.179s 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.978 22:57:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.978 ************************************ 00:07:00.978 END TEST accel_compress_verify 00:07:00.978 ************************************ 00:07:00.978 22:57:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:00.978 22:57:53 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:00.978 22:57:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.978 22:57:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.978 ************************************ 00:07:00.978 START TEST accel_wrong_workload 00:07:00.978 ************************************ 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:00.978 22:57:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:07:00.978 Unsupported workload type: foobar 00:07:00.978 [2024-06-07 22:57:53.148935] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:00.978 accel_perf options: 00:07:00.978 [-h help message] 00:07:00.978 [-q queue depth per core] 00:07:00.978 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.978 [-T number of threads per core 00:07:00.978 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.978 [-t time in seconds] 00:07:00.978 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.978 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:00.978 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.978 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.978 [-S for crc32c workload, use this seed value (default 0) 00:07:00.978 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.978 [-f for fill workload, use this BYTE value (default 255) 00:07:00.978 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.978 [-y verify result if this switch is on] 00:07:00.978 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.978 Can be used to spread operations across a wider range of memory. 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:00.978 00:07:00.978 real 0m0.030s 00:07:00.978 user 0m0.017s 00:07:00.978 sys 0m0.013s 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.978 22:57:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:00.978 ************************************ 00:07:00.978 END TEST accel_wrong_workload 00:07:00.978 ************************************ 00:07:00.978 Error: writing output failed: Broken pipe 00:07:00.978 22:57:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.978 22:57:53 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:00.978 22:57:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.979 22:57:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.979 ************************************ 00:07:00.979 START TEST accel_negative_buffers 00:07:00.979 ************************************ 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.979 22:57:53 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.979 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:00.979 22:57:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:01.239 -x option must be non-negative. 00:07:01.239 [2024-06-07 22:57:53.259473] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:01.239 accel_perf options: 00:07:01.239 [-h help message] 00:07:01.239 [-q queue depth per core] 00:07:01.239 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:01.239 [-T number of threads per core 00:07:01.239 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:01.239 [-t time in seconds] 00:07:01.239 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:01.239 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:01.239 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:01.239 [-l for compress/decompress workloads, name of uncompressed input file 00:07:01.239 [-S for crc32c workload, use this seed value (default 0) 00:07:01.239 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:01.239 [-f for fill workload, use this BYTE value (default 255) 00:07:01.239 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:01.239 [-y verify result if this switch is on] 00:07:01.239 [-a tasks to allocate per core (default: same value as -q)] 00:07:01.239 Can be used to spread operations across a wider range of memory. 
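The usage dump above is accel_perf rejecting -x -1 for the xor workload; by its own help text, -x must be non-negative and the minimum number of source buffers is 2. A valid invocation (hypothetical, not part of this run) would therefore be:

  ./build/examples/accel_perf -t 1 -w xor -y -x 2   # smallest xor run the help text allows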
00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:01.239 00:07:01.239 real 0m0.027s 00:07:01.239 user 0m0.010s 00:07:01.239 sys 0m0.017s 00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.239 22:57:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:01.239 ************************************ 00:07:01.239 END TEST accel_negative_buffers 00:07:01.239 ************************************ 00:07:01.239 Error: writing output failed: Broken pipe 00:07:01.239 22:57:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:01.239 22:57:53 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:01.239 22:57:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.239 22:57:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.239 ************************************ 00:07:01.239 START TEST accel_crc32c 00:07:01.239 ************************************ 00:07:01.239 22:57:53 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:01.239 22:57:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:01.239 [2024-06-07 22:57:53.366852] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
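After the negative tests, accel_crc32c is the first positive accel run: a crc32c workload with seed 32 (-S, per the help text above) and result verification (-y). The wall of val= lines that follows is the suite's xtrace as it walks the run's reported parameters (0x1, crc32c, 32, '4096 bytes', software, '1 seconds', ...) one value at a time. The underlying command, with flags taken straight from the trace:

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y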
00:07:01.239 [2024-06-07 22:57:53.366940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142104 ] 00:07:01.239 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.239 [2024-06-07 22:57:53.485949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.499 [2024-06-07 22:57:53.580470] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:01.499 22:57:53 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.499 22:57:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:02.880 22:57:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.880 00:07:02.880 real 0m1.422s 00:07:02.880 user 0m1.245s 00:07:02.880 sys 0m0.182s 00:07:02.880 22:57:54 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.880 22:57:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:02.880 ************************************ 00:07:02.880 END TEST accel_crc32c 00:07:02.880 ************************************ 00:07:02.880 22:57:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:02.880 22:57:54 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:02.880 22:57:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.880 22:57:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.880 ************************************ 00:07:02.880 START TEST accel_crc32c_C2 00:07:02.880 ************************************ 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.880 22:57:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:02.880 [2024-06-07 22:57:54.863590] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
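The accel_crc32c_C2 run starting here repeats the workload with -C 2, which appears to chain the crc32c operation across two source buffers. A hedged sketch of that idea: one CRC-32C carried across multiple buffers by threading the running value and finalizing only once (table code repeated from the previous sketch so the block stands alone).

```python
# Hedged sketch: one CRC-32C over two chained 4096-byte buffers, the
# shape the -C 2 variant of this test appears to exercise.
import os

def _make_table(poly=0x82F63B78):
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def crc32c_update(crc, data):               # un-finalized running value
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc

def crc32c_chained(buffers, seed=0):
    crc = (seed ^ 0xFFFFFFFF) & 0xFFFFFFFF
    for buf in buffers:
        crc = crc32c_update(crc, buf)       # carry CRC into the next buffer
    return crc ^ 0xFFFFFFFF

parts = [os.urandom(4096) for _ in range(2)]        # -C 2: two source buffers
assert crc32c_chained(parts) == crc32c_chained([b"".join(parts)])
print(hex(crc32c_chained(parts)))
```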
00:07:02.881 [2024-06-07 22:57:54.863674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142398 ] 00:07:02.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.881 [2024-06-07 22:57:54.981860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.881 [2024-06-07 22:57:55.071151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.881 22:57:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.262 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.263 00:07:04.263 real 0m1.416s 00:07:04.263 user 0m1.244s 00:07:04.263 sys 0m0.176s 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:04.263 22:57:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:04.263 ************************************ 00:07:04.263 END TEST accel_crc32c_C2 00:07:04.263 ************************************ 00:07:04.263 22:57:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:04.263 22:57:56 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:04.263 22:57:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.263 22:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.263 ************************************ 00:07:04.263 START TEST accel_copy 00:07:04.263 ************************************ 00:07:04.263 22:57:56 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.263 22:57:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:04.263 22:57:56 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:07:04.263 [2024-06-07 22:57:56.353931] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:04.263 [2024-06-07 22:57:56.354021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142679 ] 00:07:04.263 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.263 [2024-06-07 22:57:56.467694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.524 [2024-06-07 22:57:56.556692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.524 22:57:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.463 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.463 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.463 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.463 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.463 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
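The accel_copy configuration just traced (4096-byte buffers, software module, a one-second run) amounts to timed buffer copies. A crude analogue of the test's shape, not SPDK's engine: repeat a 4096-byte copy for the -t 1 window, verify, and report ops/s.

```python
# Toy analogue of the copy workload: copy a 4096-byte buffer in a loop
# for roughly the '-t 1' one-second window, then verify and report rate.
import time

SRC = bytes(range(256)) * 16        # 4096-byte source buffer
dst = bytearray(len(SRC))

ops = 0
deadline = time.monotonic() + 1.0   # '1 seconds' from the trace
while time.monotonic() < deadline:
    dst[:] = SRC                    # the copy op itself
    ops += 1

assert bytes(dst) == SRC            # verify: destination matches source
print(f"copy: {ops} ops in ~1s ({ops * len(SRC) / 1e6:.1f} MB/s)")
```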
00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:05.722 22:57:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.722 00:07:05.722 real 0m1.410s 00:07:05.722 user 0m1.251s 00:07:05.722 sys 0m0.163s 00:07:05.722 22:57:57 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.722 22:57:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.722 ************************************ 00:07:05.722 END TEST accel_copy 00:07:05.722 ************************************ 00:07:05.722 22:57:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.722 22:57:57 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:05.723 22:57:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.723 22:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.723 ************************************ 00:07:05.723 START TEST accel_fill 00:07:05.723 ************************************ 00:07:05.723 22:57:57 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:05.723 22:57:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:05.723 [2024-06-07 22:57:57.841099] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
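The accel_fill test starting here passes -f 128 -q 64 -a 64. Reading the trace below, -f 128 shows up as val=0x80, so 0x80 is taken to be the fill byte; -q and -a are read as queue depth and alignment by the usual accel_perf conventions, which the log itself does not spell out. A toy sketch of the op under those assumptions:

```python
# Sketch of the fill op: write one pattern byte across the whole buffer.
# FILL_BYTE comes from -f 128 (0x80 in the trace); QUEUE_DEPTH mimics the
# assumed meaning of -q 64 by preparing a batch of 64 buffers.
FILL_BYTE = 0x80                       # -f 128
BUF_LEN = 4096                         # '4096 bytes' in the trace
QUEUE_DEPTH = 64                       # -q 64, assumed meaning

bufs = [bytearray(BUF_LEN) for _ in range(QUEUE_DEPTH)]
for buf in bufs:                       # one "queue" of fill operations
    buf[:] = bytes([FILL_BYTE]) * BUF_LEN

assert all(set(buf) == {FILL_BYTE} for buf in bufs)
print(f"filled {len(bufs)} x {BUF_LEN} B with 0x{FILL_BYTE:02x}")
```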
00:07:05.723 [2024-06-07 22:57:57.841166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142964 ] 00:07:05.723 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.723 [2024-06-07 22:57:57.959730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.982 [2024-06-07 22:57:58.050926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.982 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.983 22:57:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:07.361 22:57:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.361 00:07:07.361 real 0m1.416s 00:07:07.361 user 0m1.256s 00:07:07.361 sys 0m0.165s 00:07:07.361 22:57:59 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.361 22:57:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:07.361 ************************************ 00:07:07.361 END TEST accel_fill 00:07:07.361 ************************************ 00:07:07.361 22:57:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:07.361 22:57:59 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:07.361 22:57:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.361 22:57:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.361 ************************************ 00:07:07.361 START TEST accel_copy_crc32c 00:07:07.361 ************************************ 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:07.361 [2024-06-07 22:57:59.328882] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
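The accel_copy_crc32c test launched here fuses the two previous workloads: copy the source into the destination and checksum the same bytes in one operation. A hedged single-pass sketch of that contract (CRC table repeated from the earlier sketch so this block stands alone):

```python
# Hedged sketch of the fused copy_crc32c op: copy src into dst and
# compute CRC-32C over the same bytes in a single pass.
import os

def _make_table(poly=0x82F63B78):
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def copy_crc32c(src, dst, seed=0):
    crc = (seed ^ 0xFFFFFFFF) & 0xFFFFFFFF
    for i, b in enumerate(src):
        dst[i] = b                                   # the copy half
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)  # the crc half
    return crc ^ 0xFFFFFFFF

src = os.urandom(4096)
dst = bytearray(4096)
crc = copy_crc32c(src, dst)
assert bytes(dst) == src
print(hex(crc))
```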
00:07:07.361 [2024-06-07 22:57:59.328979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143263 ] 00:07:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.361 [2024-06-07 22:57:59.444249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.361 [2024-06-07 22:57:59.530398] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.361 22:57:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.736 00:07:08.736 real 0m1.408s 00:07:08.736 user 0m1.235s 00:07:08.736 sys 0m0.178s 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.736 22:58:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:08.736 ************************************ 00:07:08.736 END TEST accel_copy_crc32c 00:07:08.736 ************************************ 00:07:08.736 22:58:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:08.736 22:58:00 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:08.736 22:58:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:08.737 22:58:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.737 ************************************ 00:07:08.737 START TEST accel_copy_crc32c_C2 00:07:08.737 ************************************ 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.737 22:58:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:08.737 [2024-06-07 22:58:00.806930] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:08.737 [2024-06-07 22:58:00.807008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143544 ] 00:07:08.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.737 [2024-06-07 22:58:00.922735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.737 [2024-06-07 22:58:01.008564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.996 22:58:01 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.996 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.997 22:58:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.933 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.934 00:07:09.934 real 0m1.408s 00:07:09.934 user 0m1.239s 00:07:09.934 sys 0m0.175s 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:09.934 22:58:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:09.934 ************************************ 00:07:09.934 END TEST 
accel_copy_crc32c_C2 00:07:09.934 ************************************ 00:07:10.193 22:58:02 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:10.193 22:58:02 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:10.193 22:58:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:10.193 22:58:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 ************************************ 00:07:10.193 START TEST accel_dualcast 00:07:10.193 ************************************ 00:07:10.193 22:58:02 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:10.193 22:58:02 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:10.193 [2024-06-07 22:58:02.296772] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
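Every test in this block is launched the same way: run_test wraps accel_test, which builds an accel JSON config and starts the prebuilt accel_perf example with that config piped in on /dev/fd/62. In this job build_accel_config has no module flags set (all of the "[[ 0 -gt 0 ]]" checks above fail), so an effectively empty config is passed and the software engine services the workload. A minimal way to reproduce the dualcast run by hand, assuming the SPDK build tree at this workspace path and that accel_perf falls back to the software module when no config is supplied:

    # 1-second dualcast workload with result verification (-y), mirroring the
    # invocation captured above
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y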
00:07:10.193 [2024-06-07 22:58:02.296854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143831 ] 00:07:10.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.193 [2024-06-07 22:58:02.401735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.452 [2024-06-07 22:58:02.486093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.452 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 
22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.453 22:58:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:11.830 22:58:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.830 00:07:11.830 real 0m1.397s 00:07:11.830 user 0m1.245s 00:07:11.830 sys 0m0.157s 00:07:11.830 22:58:03 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.830 22:58:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:11.830 ************************************ 00:07:11.830 END TEST accel_dualcast 00:07:11.830 ************************************ 00:07:11.830 22:58:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:11.830 22:58:03 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:11.830 22:58:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.830 22:58:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.830 ************************************ 00:07:11.830 START TEST accel_compare 00:07:11.830 ************************************ 00:07:11.830 22:58:03 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:11.830 22:58:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:11.830 [2024-06-07 22:58:03.766748] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
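The long runs of "# IFS=:" / "# read -r var val" lines above are accel.sh line 19 consuming accel_perf's output one colon-separated var:val pair at a time, with line 21's case statement picking out the module (line 22) and opcode (line 23) that actually ran. A sketch of that loop, reconstructed from the xtrace output rather than copied from accel.sh, with the harness's variable names assumed:

    # accel_perf reports its configuration as "key:value" lines; remember the
    # fields the harness asserts on afterwards
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=$val ;;   # e.g. software
            *opc*)    accel_opc=$val ;;      # e.g. compare
        esac
    done < <("$SPDK/build/examples/accel_perf" -t 1 -w compare -y)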
00:07:11.830 [2024-06-07 22:58:03.766822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144110 ] 00:07:11.830 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.830 [2024-06-07 22:58:03.883050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.830 [2024-06-07 22:58:03.968978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.830 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.831 22:58:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:13.206 22:58:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.206 00:07:13.206 real 0m1.409s 00:07:13.206 user 0m1.233s 00:07:13.206 sys 0m0.180s 00:07:13.206 22:58:05 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.206 22:58:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:13.206 ************************************ 00:07:13.206 END TEST accel_compare 00:07:13.206 ************************************ 00:07:13.206 22:58:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:13.206 22:58:05 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:13.206 22:58:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.206 22:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.206 ************************************ 00:07:13.206 START TEST accel_xor 00:07:13.206 ************************************ 00:07:13.206 22:58:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:13.206 22:58:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:13.206 [2024-06-07 22:58:05.246136] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
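After each workload finishes, accel/accel.sh line 27 checks what actually executed: a module reported in, the requested opcode ran, and the module is the software engine. (Bash xtrace escapes every character on the pattern side of a [[ == ]] comparison, which is why the expected value prints as \s\o\f\t\w\a\r\e.) The same three checks in isolation, with variable names assumed from the trace:

    [[ -n "$accel_module" ]]           # some accel module handled the run
    [[ -n "$accel_opc" ]]              # the requested opcode was exercised
    [[ "$accel_module" == software ]]  # no hardware config loaded, so software is expected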
00:07:13.206 [2024-06-07 22:58:05.246216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144402 ] 00:07:13.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.206 [2024-06-07 22:58:05.363501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.206 [2024-06-07 22:58:05.448923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.465 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.466 22:58:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 
22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:14.403 22:58:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.403 00:07:14.403 real 0m1.411s 00:07:14.403 user 0m1.235s 00:07:14.403 sys 0m0.180s 00:07:14.403 22:58:06 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.403 22:58:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:14.403 ************************************ 00:07:14.403 END TEST accel_xor 00:07:14.403 ************************************ 00:07:14.403 22:58:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:14.403 22:58:06 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:14.403 22:58:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.403 22:58:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.662 ************************************ 00:07:14.662 START TEST accel_xor 00:07:14.662 ************************************ 00:07:14.662 22:58:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:14.662 22:58:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:14.662 [2024-06-07 22:58:06.724915] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
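This second accel_xor pass repeats the workload with -x 3, and the config echo shows val=3 where the previous pass showed val=2: three source buffers are XORed into a single destination instead of the default two. The byte-level semantics, as an illustration only rather than how accel_perf implements it:

    # dst[i] = a[i] ^ b[i] ^ c[i] for a three-source xor
    a=0xA5 b=0x3C c=0x0F
    printf 'dst byte = 0x%02X\n' $(( a ^ b ^ c ))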
00:07:14.662 [2024-06-07 22:58:06.724996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144681 ] 00:07:14.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.662 [2024-06-07 22:58:06.842689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.662 [2024-06-07 22:58:06.928485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.921 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.922 22:58:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 
22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:15.859 22:58:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.859 00:07:15.859 real 0m1.410s 00:07:15.859 user 0m1.242s 00:07:15.859 sys 0m0.172s 00:07:15.859 22:58:08 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:15.859 22:58:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:15.859 ************************************ 00:07:15.859 END TEST accel_xor 00:07:15.859 ************************************ 00:07:16.119 22:58:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:16.119 22:58:08 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:16.119 22:58:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.119 22:58:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.119 ************************************ 00:07:16.119 START TEST accel_dif_verify 00:07:16.119 ************************************ 00:07:16.119 22:58:08 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:16.119 22:58:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:16.119 [2024-06-07 22:58:08.206589] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
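dif_verify is the first workload here that hands accel_perf metadata sizes as well: the config echo shows '4096 bytes' twice, then '512 bytes' and '8 bytes'. Read as T10 DIF parameters (an interpretation of the trace, not taken from accel.sh), that is a 4096-byte buffer split into 512-byte blocks, each block carrying an 8-byte protection tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag):

    # block count and total protection-information bytes for the sizes above
    payload=4096 block=512 pi=8
    echo "blocks=$(( payload / block )), pi_bytes=$(( payload / block * pi ))"   # blocks=8, pi_bytes=64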
00:07:16.119 [2024-06-07 22:58:08.206670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144968 ] 00:07:16.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.119 [2024-06-07 22:58:08.325735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.381 [2024-06-07 22:58:08.411955] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.381 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 
22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.382 22:58:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.389 
22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.389 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:17.390 22:58:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.390 00:07:17.390 real 0m1.413s 00:07:17.390 user 0m1.250s 00:07:17.390 sys 0m0.169s 00:07:17.390 22:58:09 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.390 22:58:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:17.390 ************************************ 00:07:17.390 END TEST accel_dif_verify 00:07:17.390 ************************************ 00:07:17.390 22:58:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:17.390 22:58:09 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:17.390 22:58:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.390 22:58:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.649 ************************************ 00:07:17.649 START TEST accel_dif_generate 00:07:17.649 ************************************ 00:07:17.650 22:58:09 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.650 
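Each test body lands at roughly 1.4s of real time against its 1-second -t 1 workload; the extra ~0.4s is application start-up and teardown, including the per-run EAL init (each run gets its own --file-prefix=spdk_pid... so concurrent SPDK processes do not collide on hugepage files, and the "No free 2048 kB hugepages reported on node 1" notice is informational rather than fatal here). To pull the per-test timings out of a saved copy of this console output (log file name assumed):

    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' console.log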
22:58:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:17.650 22:58:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:17.650 [2024-06-07 22:58:09.692054] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:17.650 [2024-06-07 22:58:09.692130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145223 ] 00:07:17.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.650 [2024-06-07 22:58:09.807236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.650 [2024-06-07 22:58:09.893997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.909 22:58:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.910 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.910 22:58:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:18.848 22:58:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.848 00:07:18.848 real 0m1.409s 00:07:18.848 user 0m1.238s 00:07:18.848 sys 
0m0.177s 00:07:18.848 22:58:11 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.848 22:58:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:18.848 ************************************ 00:07:18.848 END TEST accel_dif_generate 00:07:18.848 ************************************ 00:07:18.848 22:58:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:18.848 22:58:11 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:18.848 22:58:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.848 22:58:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.108 ************************************ 00:07:19.108 START TEST accel_dif_generate_copy 00:07:19.108 ************************************ 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:19.108 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:19.108 [2024-06-07 22:58:11.175612] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:07:19.108 [2024-06-07 22:58:11.175690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145476 ] 00:07:19.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.108 [2024-06-07 22:58:11.292234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.108 [2024-06-07 22:58:11.377972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.368 22:58:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.306 00:07:20.306 real 0m1.410s 00:07:20.306 user 0m1.249s 00:07:20.306 sys 0m0.166s 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.306 22:58:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.306 ************************************ 00:07:20.306 END TEST accel_dif_generate_copy 00:07:20.306 ************************************ 00:07:20.566 22:58:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:20.566 22:58:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.566 22:58:12 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:20.566 22:58:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:20.566 22:58:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.566 ************************************ 00:07:20.566 START TEST accel_comp 00:07:20.566 ************************************ 00:07:20.566 22:58:12 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:20.566 22:58:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:20.566 [2024-06-07 22:58:12.662737] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:20.566 [2024-06-07 22:58:12.662809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145729 ] 00:07:20.566 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.566 [2024-06-07 22:58:12.779046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.826 [2024-06-07 22:58:12.869270] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 
22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.826 22:58:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:22.205 22:58:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.205 00:07:22.205 real 0m1.422s 00:07:22.205 user 0m1.262s 00:07:22.205 sys 0m0.174s 00:07:22.205 22:58:14 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.205 22:58:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 ************************************ 00:07:22.205 END TEST accel_comp 00:07:22.205 ************************************ 00:07:22.205 22:58:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:22.205 22:58:14 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:22.205 22:58:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.205 22:58:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.205 ************************************ 00:07:22.205 START TEST accel_decomp 00:07:22.205 ************************************ 00:07:22.205 22:58:14 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:22.205 22:58:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:22.205 22:58:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:22.205 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.205 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:22.206 [2024-06-07 22:58:14.165699] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:22.206 [2024-06-07 22:58:14.165783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145999 ] 00:07:22.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.206 [2024-06-07 22:58:14.282514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.206 [2024-06-07 22:58:14.368459] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 
22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.206 22:58:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.585 22:58:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.585 00:07:23.585 real 0m1.414s 00:07:23.585 user 0m1.254s 00:07:23.585 sys 0m0.175s 00:07:23.585 22:58:15 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.585 22:58:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:23.585 ************************************ 00:07:23.585 END TEST accel_decomp 00:07:23.585 ************************************ 00:07:23.585 
22:58:15 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.585 22:58:15 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:23.585 22:58:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.585 22:58:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.585 ************************************ 00:07:23.585 START TEST accel_decomp_full 00:07:23.585 ************************************ 00:07:23.585 22:58:15 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:23.585 22:58:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:23.585 [2024-06-07 22:58:15.663825] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:07:23.585 [2024-06-07 22:58:15.663901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146253 ] 00:07:23.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.585 [2024-06-07 22:58:15.779019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.845 [2024-06-07 22:58:15.869255] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.845 22:58:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 
-- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.225 22:58:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.225 00:07:25.225 real 0m1.429s 00:07:25.225 user 0m1.262s 00:07:25.225 sys 0m0.181s 00:07:25.225 22:58:17 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.225 22:58:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:25.225 ************************************ 00:07:25.225 END TEST accel_decomp_full 00:07:25.225 ************************************ 00:07:25.225 22:58:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.225 22:58:17 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:25.225 22:58:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.225 22:58:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.225 ************************************ 00:07:25.225 START TEST accel_decomp_mcore 00:07:25.225 ************************************ 00:07:25.225 22:58:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.225 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:25.225 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:25.226 [2024-06-07 22:58:17.171932] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:25.226 [2024-06-07 22:58:17.172017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146521 ] 00:07:25.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.226 [2024-06-07 22:58:17.290151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.226 [2024-06-07 22:58:17.380464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.226 [2024-06-07 22:58:17.380558] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.226 [2024-06-07 22:58:17.380677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.226 [2024-06-07 22:58:17.380679] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 22:58:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
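The val=/case/IFS=:/read -r loop traced above is the accel_test wrapper replaying its options while it assembles a config for accel_perf. Distilled from the accel.sh@12 invocation earlier in this test, a minimal sketch of the equivalent direct run (SPDK_DIR is an assumed shorthand for the workspace checkout; the -c /dev/fd/NN config argument is omitted since no modules are configured here):

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # accel_decomp_mcore boils down to: decompress the gzip'd bib file in
  # software on four reactors for one second, verifying the output.
  args=(
    -t 1                            # run time in seconds
    -w decompress                   # opcode under test
    -l "$SPDK_DIR/test/accel/bib"   # compressed input file
    -y                              # verify the decompressed data
    -m 0xf                          # core mask: reactors on cores 0-3
  )
  "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"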
00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.605 00:07:26.605 real 0m1.432s 00:07:26.605 user 0m4.598s 00:07:26.605 sys 0m0.191s 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.605 22:58:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:26.605 ************************************ 00:07:26.605 END TEST accel_decomp_mcore 00:07:26.605 ************************************ 00:07:26.606 22:58:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.606 22:58:18 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:26.606 22:58:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.606 22:58:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.606 ************************************ 00:07:26.606 START TEST accel_decomp_full_mcore 00:07:26.606 ************************************ 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.606 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.606 [2024-06-07 22:58:18.685333] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:26.606 [2024-06-07 22:58:18.685409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146783 ] 00:07:26.606 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.606 [2024-06-07 22:58:18.802890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.865 [2024-06-07 22:58:18.894507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.865 [2024-06-07 22:58:18.894608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.865 [2024-06-07 22:58:18.894703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.865 [2024-06-07 22:58:18.894705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:26.865 22:58:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.865 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 22:58:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.245 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.246 00:07:28.246 real 0m1.450s 00:07:28.246 user 0m4.660s 00:07:28.246 sys 0m0.195s 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.246 22:58:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:28.246 ************************************ 00:07:28.246 END TEST accel_decomp_full_mcore 00:07:28.246 ************************************ 00:07:28.246 22:58:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.246 22:58:20 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:28.246 22:58:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.246 22:58:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.246 ************************************ 00:07:28.246 START TEST accel_decomp_mthread 00:07:28.246 ************************************ 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:07:28.246 [2024-06-07 22:58:20.212698] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:28.246 [2024-06-07 22:58:20.212772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147057 ] 00:07:28.246 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.246 [2024-06-07 22:58:20.327954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.246 [2024-06-07 22:58:20.413742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 
22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.246 22:58:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.246 22:58:20 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.626 00:07:29.626 real 0m1.419s 00:07:29.626 user 0m1.257s 00:07:29.626 sys 0m0.176s 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:29.626 22:58:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:29.626 ************************************ 00:07:29.626 END TEST accel_decomp_mthread 00:07:29.626 ************************************ 00:07:29.626 22:58:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.626 22:58:21 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:29.626 22:58:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.626 
22:58:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.626 ************************************ 00:07:29.626 START TEST accel_decomp_full_mthread 00:07:29.626 ************************************ 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:29.626 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:29.626 [2024-06-07 22:58:21.712850] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
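The two *_mthread variants trade the core mask for a thread count: the EAL parameters below pin the app to core 0x1, and the val=2 entries in the trace correspond to -T 2. A minimal sketch under the same SPDK_DIR assumption, reading -T as accel_perf's worker-thread count:

  # accel_decomp_full_mthread: whole-file decompress on a single core
  # with two worker threads driving the software module.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2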
00:07:29.626 [2024-06-07 22:58:21.712933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147313 ] 00:07:29.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.626 [2024-06-07 22:58:21.832710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.886 [2024-06-07 22:58:21.923009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.886 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.887 22:58:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.266 22:58:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.266 00:07:31.266 real 0m1.454s 00:07:31.266 user 0m1.291s 00:07:31.266 sys 0m0.175s 00:07:31.267 22:58:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.267 22:58:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.267 ************************************ 00:07:31.267 END TEST accel_decomp_full_mthread 00:07:31.267 
************************************
00:07:31.267 22:58:23 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:07:31.267 22:58:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:31.267 22:58:23 accel -- accel/accel.sh@137 -- # build_accel_config
00:07:31.267 22:58:23 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:07:31.267 22:58:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:31.267 22:58:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:31.267 22:58:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.267 22:58:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:31.267 22:58:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:31.267 22:58:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:31.267 22:58:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:31.267 22:58:23 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:31.267 22:58:23 accel -- accel/accel.sh@41 -- # jq -r .
00:07:31.267 ************************************
00:07:31.267 START TEST accel_dif_functional_tests
00:07:31.267 ************************************
00:07:31.267 22:58:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:31.267 [2024-06-07 22:58:23.251635] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:31.267 [2024-06-07 22:58:23.251718] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147591 ]
00:07:31.267 EAL: No free 2048 kB hugepages reported on node 1
00:07:31.267 [2024-06-07 22:58:23.369902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:31.267 [2024-06-07 22:58:23.460353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:07:31.267 [2024-06-07 22:58:23.460446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:07:31.267 [2024-06-07 22:58:23.460450] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.267
00:07:31.267
00:07:31.267 CUnit - A unit testing framework for C - Version 2.1-3
00:07:31.267 http://cunit.sourceforge.net/
00:07:31.267
00:07:31.267
00:07:31.267 Suite: accel_dif
00:07:31.267 Test: verify: DIF generated, GUARD check ...passed
00:07:31.267 Test: verify: DIF generated, APPTAG check ...passed
00:07:31.267 Test: verify: DIF generated, REFTAG check ...passed
00:07:31.267 Test: verify: DIF not generated, GUARD check ...[2024-06-07 22:58:23.535022] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:31.267 passed
00:07:31.267 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 22:58:23.535090] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:31.267 passed
00:07:31.267 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 22:58:23.535125] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:31.267 passed
00:07:31.267 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:31.267 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 22:58:23.535198] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:31.267 passed
00:07:31.267 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:31.267 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:31.267 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:31.267 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 22:58:23.535332] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:31.267 passed
00:07:31.267 Test: verify copy: DIF generated, GUARD check ...passed
00:07:31.267 Test: verify copy: DIF generated, APPTAG check ...passed
00:07:31.267 Test: verify copy: DIF generated, REFTAG check ...passed
00:07:31.267 Test: verify copy: DIF not generated, GUARD check ...[2024-06-07 22:58:23.535485] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:31.267 passed
00:07:31.267 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-07 22:58:23.535520] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:31.267 passed
00:07:31.267 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-07 22:58:23.535556] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:31.267 passed
00:07:31.267 Test: generate copy: DIF generated, GUARD check ...passed
00:07:31.267 Test: generate copy: DIF generated, APPTAG check ...passed
00:07:31.267 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:31.267 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:31.267 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:31.267 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:31.267 Test: generate copy: iovecs-len validate ...[2024-06-07 22:58:23.535787] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
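The -c /dev/fd/62 argument at the top of this test is the JSON accel config arriving over an anonymous descriptor: accel.sh@137 shows run_test and build_accel_config traced from the same source line, which is the signature of bash process substitution. A hedged reconstruction of that pattern (illustrative, not the literal accel.sh code):

  # build_accel_config emits JSON on stdout; <(...) turns that stream into
  # a /dev/fd/NN path the dif example can open like a regular config file.
  "$SPDK_DIR/test/accel/dif/dif" -c <(build_accel_config)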
00:07:31.267 passed
00:07:31.267 Test: generate copy: buffer alignment validate ...passed
00:07:31.267
00:07:31.267 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:31.267              suites      1      1    n/a      0        0
00:07:31.267               tests     26     26     26      0        0
00:07:31.267             asserts    115    115    115      0      n/a
00:07:31.267
00:07:31.267 Elapsed time = 0.003 seconds
00:07:31.526
00:07:31.526 real 0m0.486s
00:07:31.526 user 0m0.654s
00:07:31.526 sys 0m0.209s
00:07:31.526 22:58:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:31.526 22:58:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:31.526 ************************************
00:07:31.526 END TEST accel_dif_functional_tests
00:07:31.526 ************************************
00:07:31.527
00:07:31.527 real 0m33.256s
00:07:31.527 user 0m35.323s
00:07:31.527 sys 0m6.105s
00:07:31.527 22:58:23 accel -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:31.527 22:58:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.527 ************************************
00:07:31.527 END TEST accel
00:07:31.527 ************************************
00:07:31.527 22:58:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:31.527 22:58:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:31.527 22:58:23 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:31.527 22:58:23 -- common/autotest_common.sh@10 -- # set +x
00:07:31.786 ************************************
00:07:31.786 START TEST accel_rpc
00:07:31.786 ************************************
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:31.786 * Looking for test storage...
00:07:31.786 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4147900
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4147900
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 4147900 ']'
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:31.786 [2024-06-07 22:58:23.942839] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
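The accel_rpc suite that starts here is easiest to read as the RPC conversation it drives. A condensed sketch, assuming the stock scripts/rpc.py client stands in for the suite's rpc_cmd helper:

  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &   # start paused, as accel_rpc.sh@13 does
  spdk_tgt_pid=$!
  # (the suite's waitforlisten blocks here until /var/tmp/spdk.sock is up)
  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" accel_assign_opc -o copy -m incorrect      # logged, later overridden
  "$rpc" accel_assign_opc -o copy -m software       # last assignment wins
  "$rpc" framework_start_init                       # leave the wait-for-rpc state
  "$rpc" accel_get_opc_assignments | jq -r .copy    # prints: software
  kill "$spdk_tgt_pid"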
00:07:31.526
00:07:31.526 real 0m33.256s
00:07:31.526 user 0m35.323s
00:07:31.526 sys 0m6.105s
00:07:31.526 22:58:23 accel -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:31.526 22:58:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.527 ************************************
00:07:31.527 END TEST accel
00:07:31.527 ************************************
00:07:31.527 22:58:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:31.527 22:58:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:31.527 22:58:23 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:31.527 22:58:23 -- common/autotest_common.sh@10 -- # set +x
00:07:31.786 ************************************
00:07:31.786 START TEST accel_rpc
00:07:31.786 ************************************
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:31.786 * Looking for test storage...
00:07:31.786 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4147900
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4147900
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 4147900 ']'
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:31.786 22:58:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:31.786 22:58:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:31.786 [2024-06-07 22:58:23.942839] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:31.786 [2024-06-07 22:58:23.942902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147900 ]
00:07:31.786 EAL: No free 2048 kB hugepages reported on node 1
00:07:31.786 [2024-06-07 22:58:24.057973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.045 [2024-06-07 22:58:24.150025] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.612 22:58:24 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:32.612 22:58:24 accel_rpc -- common/autotest_common.sh@863 -- # return 0
00:07:32.612 22:58:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:07:32.612 22:58:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:07:32.612 22:58:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:07:32.612 22:58:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:07:32.612 22:58:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:07:32.612 22:58:24 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:32.612 22:58:24 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:32.612 22:58:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:32.871 ************************************
00:07:32.871 START TEST accel_assign_opcode
00:07:32.871 ************************************
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:32.871 [2024-06-07 22:58:24.900308] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:32.871 [2024-06-07 22:58:24.908315] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:32.871 22:58:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:32.871 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:32.871 22:58:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:07:32.871 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:32.871 22:58:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:07:32.871 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:32.872 22:58:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:32.872 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:32.872 software
00:07:32.872
00:07:32.872 real 0m0.250s
00:07:32.872 user 0m0.044s
00:07:32.872 sys 0m0.014s
00:07:32.872 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:32.872 22:58:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:32.872 ************************************
00:07:32.872 END TEST accel_assign_opcode
00:07:32.872 ************************************
00:07:33.131 22:58:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4147900
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 4147900 ']'
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 4147900
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@954 -- # uname
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4147900
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4147900' killing process with pid 4147900
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@968 -- # kill 4147900
00:07:33.131 22:58:25 accel_rpc -- common/autotest_common.sh@973 -- # wait 4147900
00:07:33.390
00:07:33.390 real 0m1.727s
00:07:33.390 user 0m1.813s
00:07:33.390 sys 0m0.531s
00:07:33.390 22:58:25 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:33.390 22:58:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:33.391 ************************************
00:07:33.391 END TEST accel_rpc
00:07:33.391 ************************************
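The sequence just traced can be reproduced by hand against a target started with --wait-for-rpc: opcode-to-module assignments are only accepted before framework_start_init runs, after which they are fixed. A sketch in Python wrapping the same scripts/rpc.py that the test's rpc_cmd helper calls (the path assumes this workspace layout):

import json
import subprocess

RPC = "/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args):
    # Thin wrapper over scripts/rpc.py, the same client used in the trace.
    return subprocess.run([RPC, *args], check=True,
                          capture_output=True, text=True).stdout

# Assign the copy opcode to the software module, then let the framework
# finish initializing; after framework_start_init the assignment is fixed.
rpc("accel_assign_opc", "-o", "copy", "-m", "software")
rpc("framework_start_init")

# The same check the test performs with `jq -r .copy | grep software`.
assignments = json.loads(rpc("accel_get_opc_assignments"))
assert assignments["copy"] == "software"
print(assignments["copy"])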
00:07:33.391 22:58:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:33.391 22:58:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:33.391 22:58:25 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:33.391 22:58:25 -- common/autotest_common.sh@10 -- # set +x
00:07:33.650 ************************************
00:07:33.650 START TEST app_cmdline
00:07:33.650 ************************************
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:33.650 * Looking for test storage...
00:07:33.650 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:33.650 22:58:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:33.650 22:58:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4148248
00:07:33.650 22:58:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4148248
00:07:33.650 22:58:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 4148248 ']'
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:33.650 22:58:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:33.650 [2024-06-07 22:58:25.779065] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:33.650 [2024-06-07 22:58:25.779132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148248 ]
00:07:33.650 EAL: No free 2048 kB hugepages reported on node 1
00:07:33.650 [2024-06-07 22:58:25.893765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.910 [2024-06-07 22:58:25.981680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.478 22:58:26 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:34.478 22:58:26 app_cmdline -- common/autotest_common.sh@863 -- # return 0
00:07:34.478 22:58:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:34.738 {
00:07:34.738 "version": "SPDK v24.09-pre git sha1 86abcfbbd",
00:07:34.738 "fields": {
00:07:34.738 "major": 24,
00:07:34.738 "minor": 9,
00:07:34.738 "patch": 0,
00:07:34.738 "suffix": "-pre",
00:07:34.738 "commit": "86abcfbbd"
00:07:34.738 }
00:07:34.738 }
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:34.738 22:58:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@649 -- # local es=0
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]]
00:07:34.738 22:58:26 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.997 request:
00:07:34.997 {
00:07:34.997 "method": "env_dpdk_get_mem_stats",
00:07:34.997 "req_id": 1
00:07:34.997 }
00:07:34.997 Got JSON-RPC error response
00:07:34.997 response:
00:07:34.997 {
00:07:34.997 "code": -32601,
00:07:34.997 "message": "Method not found"
00:07:34.997 }
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@652 -- # es=1
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:07:34.997 22:58:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4148248
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 4148248 ']'
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 4148248
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@954 -- # uname
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:34.997 22:58:27 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4148248
00:07:35.256 22:58:27 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:35.256 22:58:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:35.256 22:58:27 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4148248' killing process with pid 4148248
00:07:35.256 22:58:27 app_cmdline -- common/autotest_common.sh@968 -- # kill 4148248
00:07:35.256 22:58:27 app_cmdline -- common/autotest_common.sh@973 -- # wait 4148248
00:07:35.515
00:07:35.515 real 0m1.955s
00:07:35.515 user 0m2.361s
00:07:35.515 sys 0m0.590s
00:07:35.515 22:58:27 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:35.515 22:58:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:35.515 ************************************
00:07:35.515 END TEST app_cmdline
00:07:35.515 ************************************
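The env_dpdk_get_mem_stats failure above is the point of the test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method must come back as JSON-RPC error -32601. A minimal sketch of that exchange in Python, speaking the JSON-RPC protocol directly over /var/tmp/spdk.sock; this hand-rolled client is illustrative, and scripts/rpc.py remains the supported one:

import json
import socket

def rpc_call(method, params=None, sock_path="/var/tmp/spdk.sock"):
    # One request/response over spdk_tgt's Unix-domain JSON-RPC socket.
    req = {"jsonrpc": "2.0", "method": method, "id": 1}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf.decode())  # complete JSON object received
            except ValueError:
                continue  # keep reading

# A whitelisted method succeeds and reports the version printed above.
print(rpc_call("spdk_get_version")["result"]["version"])

# Anything outside --rpcs-allowed is rejected with code -32601, which is
# exactly the "Method not found" response body in the log.
resp = rpc_call("env_dpdk_get_mem_stats")
assert resp["error"]["code"] == -32601, resp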
00:07:35.515 22:58:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:35.515 22:58:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:35.515 22:58:27 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:35.515 22:58:27 -- common/autotest_common.sh@10 -- # set +x
00:07:35.515 ************************************
00:07:35.515 START TEST version
00:07:35.515 ************************************
00:07:35.515 22:58:27 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:35.515 * Looking for test storage...
00:07:35.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:35.515 22:58:27 version -- app/version.sh@17 -- # get_header_version major
00:07:35.515 22:58:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.515 22:58:27 version -- app/version.sh@14 -- # cut -f2
00:07:35.515 22:58:27 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.774 22:58:27 version -- app/version.sh@17 -- # major=24
00:07:35.774 22:58:27 version -- app/version.sh@18 -- # get_header_version minor
00:07:35.774 22:58:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # cut -f2
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.774 22:58:27 version -- app/version.sh@18 -- # minor=9
00:07:35.774 22:58:27 version -- app/version.sh@19 -- # get_header_version patch
00:07:35.774 22:58:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # cut -f2
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.774 22:58:27 version -- app/version.sh@19 -- # patch=0
00:07:35.774 22:58:27 version -- app/version.sh@20 -- # get_header_version suffix
00:07:35.774 22:58:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # cut -f2
00:07:35.774 22:58:27 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.774 22:58:27 version -- app/version.sh@20 -- # suffix=-pre
00:07:35.774 22:58:27 version -- app/version.sh@22 -- # version=24.9
00:07:35.774 22:58:27 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:35.774 22:58:27 version -- app/version.sh@28 -- # version=24.9rc0
00:07:35.774 22:58:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python
00:07:35.774 22:58:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:35.774 22:58:27 version -- app/version.sh@30 -- # py_version=24.9rc0
00:07:35.774 22:58:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:07:35.774
00:07:35.774 real 0m0.189s
00:07:35.774 user 0m0.101s
00:07:35.774 sys 0m0.134s
00:07:35.775 22:58:27 version -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:35.775 22:58:27 version -- common/autotest_common.sh@10 -- # set +x
00:07:35.775 ************************************
00:07:35.775 END TEST version
00:07:35.775 ************************************
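What the version suite just checked: the SPDK_VERSION_* macros in include/spdk/version.h must agree with the in-tree Python package's spdk.__version__, with the "-pre" suffix rendered as an rc0 pre-release marker (24.09-pre maps to 24.9rc0). A Python equivalent of the grep/cut/tr pipeline traced above; the regex and the suffix mapping are inferred from the trace rather than copied from version.sh:

import re
import spdk  # resolvable when PYTHONPATH includes the in-tree python/ directory

HEADER = ("/var/jenkins/workspace/short-fuzz-phy-autotest/spdk"
          "/include/spdk/version.h")

def get_header_version(name):
    # Same idea as the script's grep | cut | tr pipeline: take the value of
    # the #define and strip surrounding quotes (SUFFIX is a quoted string).
    text = open(HEADER).read()
    return re.search(r'^#define SPDK_VERSION_%s[ \t]+(\S+)' % name,
                     text, re.M).group(1).strip('"')

major = get_header_version("MAJOR")    # 24
minor = get_header_version("MINOR")    # 9
patch = get_header_version("PATCH")    # 0
suffix = get_header_version("SUFFIX")  # -pre

version = "%s.%s" % (major, minor)
if patch != "0":
    version += ".%s" % patch
if suffix == "-pre":
    version += "rc0"  # pre-release headers advertise themselves as rc0

# The comparison performed at version.sh@31: header and package agree.
assert version == spdk.__version__ == "24.9rc0"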
00:07:35.775 22:58:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@198 -- # uname -s
00:07:35.775 22:58:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:35.775 22:58:27 -- common/autotest_common.sh@729 -- # xtrace_disable
00:07:35.775 22:58:27 -- common/autotest_common.sh@10 -- # set +x
00:07:35.775 22:58:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:07:35.775 22:58:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]]
00:07:35.775 22:58:27 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:35.775 22:58:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:35.775 22:58:27 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:35.775 22:58:27 -- common/autotest_common.sh@10 -- # set +x
00:07:36.034 ************************************
00:07:36.034 START TEST llvm_fuzz
00:07:36.034 ************************************
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh
00:07:36.034 * Looking for test storage...
00:07:36.034 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz
00:07:36.034 22:58:28 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets))
00:07:36.034 22:58:28 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=()
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]]
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*)
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}")
00:07:36.034 22:58:28 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio'
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]]
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}"
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in
00:07:36.035 22:58:28 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh
00:07:36.035 22:58:28 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:36.035 22:58:28 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:36.035 22:58:28 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:36.035 ************************************
00:07:36.035 START TEST nvmf_fuzz
00:07:36.035 ************************************
00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh
00:07:36.035 * Looking for test storage...
00:07:36.035 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:36.035 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:36.036 22:58:28 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:36.036 #define SPDK_CONFIG_H 00:07:36.036 #define SPDK_CONFIG_APPS 1 00:07:36.036 #define SPDK_CONFIG_ARCH native 00:07:36.036 #undef SPDK_CONFIG_ASAN 00:07:36.036 #undef SPDK_CONFIG_AVAHI 00:07:36.036 #undef SPDK_CONFIG_CET 00:07:36.036 #define SPDK_CONFIG_COVERAGE 1 00:07:36.036 #define SPDK_CONFIG_CROSS_PREFIX 00:07:36.036 #undef SPDK_CONFIG_CRYPTO 00:07:36.036 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:36.036 #undef SPDK_CONFIG_CUSTOMOCF 00:07:36.036 #undef SPDK_CONFIG_DAOS 00:07:36.036 #define SPDK_CONFIG_DAOS_DIR 00:07:36.036 #define SPDK_CONFIG_DEBUG 1 00:07:36.036 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:36.036 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:36.036 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:36.036 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:36.036 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:36.036 #undef SPDK_CONFIG_DPDK_UADK 00:07:36.036 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:36.036 #define SPDK_CONFIG_EXAMPLES 1 00:07:36.036 #undef SPDK_CONFIG_FC 00:07:36.036 #define SPDK_CONFIG_FC_PATH 00:07:36.036 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:36.036 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:36.036 #undef SPDK_CONFIG_FUSE 00:07:36.036 #define SPDK_CONFIG_FUZZER 1 00:07:36.036 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:36.036 #undef SPDK_CONFIG_GOLANG 00:07:36.036 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:36.036 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:36.036 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:36.036 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:36.036 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:36.036 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:36.036 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:36.036 #define SPDK_CONFIG_IDXD 1 00:07:36.036 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:36.036 #undef SPDK_CONFIG_IPSEC_MB 00:07:36.036 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:36.036 #define SPDK_CONFIG_ISAL 1 00:07:36.036 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:36.036 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:36.036 #define SPDK_CONFIG_LIBDIR 00:07:36.036 #undef SPDK_CONFIG_LTO 00:07:36.036 #define SPDK_CONFIG_MAX_LCORES 00:07:36.036 #define SPDK_CONFIG_NVME_CUSE 1 00:07:36.036 #undef SPDK_CONFIG_OCF 00:07:36.036 #define SPDK_CONFIG_OCF_PATH 00:07:36.036 #define SPDK_CONFIG_OPENSSL_PATH 00:07:36.036 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:36.036 #define SPDK_CONFIG_PGO_DIR 00:07:36.036 #undef SPDK_CONFIG_PGO_USE 00:07:36.036 #define SPDK_CONFIG_PREFIX /usr/local 00:07:36.036 #undef SPDK_CONFIG_RAID5F 00:07:36.036 #undef 
SPDK_CONFIG_RBD 00:07:36.036 #define SPDK_CONFIG_RDMA 1 00:07:36.036 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:36.036 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:36.036 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:36.036 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:36.036 #undef SPDK_CONFIG_SHARED 00:07:36.036 #undef SPDK_CONFIG_SMA 00:07:36.036 #define SPDK_CONFIG_TESTS 1 00:07:36.036 #undef SPDK_CONFIG_TSAN 00:07:36.036 #define SPDK_CONFIG_UBLK 1 00:07:36.036 #define SPDK_CONFIG_UBSAN 1 00:07:36.036 #undef SPDK_CONFIG_UNIT_TESTS 00:07:36.036 #undef SPDK_CONFIG_URING 00:07:36.036 #define SPDK_CONFIG_URING_PATH 00:07:36.036 #undef SPDK_CONFIG_URING_ZNS 00:07:36.036 #undef SPDK_CONFIG_USDT 00:07:36.036 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:36.036 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:36.036 #define SPDK_CONFIG_VFIO_USER 1 00:07:36.036 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:36.036 #define SPDK_CONFIG_VHOST 1 00:07:36.036 #define SPDK_CONFIG_VIRTIO 1 00:07:36.036 #undef SPDK_CONFIG_VTUNE 00:07:36.036 #define SPDK_CONFIG_VTUNE_DIR 00:07:36.036 #define SPDK_CONFIG_WERROR 1 00:07:36.036 #define SPDK_CONFIG_WPDK_DIR 00:07:36.036 #undef SPDK_CONFIG_XNVME 00:07:36.036 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:07:36.036 22:58:28 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:36.036 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:36.296 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:36.296 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:36.296 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:36.297 
22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:36.297 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 4148866 ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 4148866 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:36.298 
22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7O47e7 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.7O47e7/tests/nvmf /tmp/spdk.7O47e7 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.298 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4327477248 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=49268846592 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=12473434112 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342145024 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6311936 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869696512 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1445888 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:36.299 * Looking for test storage... 
00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=49268846592 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=14688026624 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.299 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # true 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.299 22:58:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:36.299 [2024-06-07 22:58:28.481300] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:36.299 [2024-06-07 22:58:28.481375] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148972 ] 00:07:36.299 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.559 [2024-06-07 22:58:28.793891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.817 [2024-06-07 22:58:28.899866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.817 [2024-06-07 22:58:28.962853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.818 [2024-06-07 22:58:28.979229] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:36.818 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.818 INFO: Seed: 2133355039 00:07:36.818 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:36.818 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:36.818 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.818 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.818 #2 INITED exec/s: 0 rss: 63Mb 00:07:36.818 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:36.818 This may also happen if the target rejected all inputs we tried so far 00:07:36.818 [2024-06-07 22:58:29.044614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:36.818 [2024-06-07 22:58:29.044651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.385 NEW_FUNC[1/686]: 0x482e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:37.385 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.385 #39 NEW cov: 11809 ft: 11810 corp: 2/112b lim: 320 exec/s: 0 rss: 70Mb L: 111/111 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:37.385 [2024-06-07 22:58:29.495869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.385 [2024-06-07 22:58:29.495919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.385 #41 NEW cov: 11939 ft: 12514 corp: 3/206b lim: 320 exec/s: 0 rss: 70Mb L: 94/111 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:37.385 [2024-06-07 22:58:29.545791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.385 [2024-06-07 22:58:29.545823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.385 #47 NEW cov: 11945 ft: 12750 corp: 4/300b lim: 320 exec/s: 0 rss: 70Mb L: 94/111 MS: 1 CMP- DE: "\377\203"- 00:07:37.385 [2024-06-07 22:58:29.606039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:37.385 [2024-06-07 22:58:29.606072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.385 #55 NEW cov: 12049 ft: 12950 corp: 5/404b lim: 320 exec/s: 0 rss: 70Mb L: 104/111 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:07:37.385 [2024-06-07 22:58:29.656124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.385 [2024-06-07 22:58:29.656160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.644 #56 NEW cov: 12049 ft: 13048 corp: 6/498b lim: 320 exec/s: 0 rss: 70Mb L: 94/111 MS: 1 ChangeBit- 00:07:37.644 [2024-06-07 22:58:29.716294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.644 [2024-06-07 22:58:29.716326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.644 #57 NEW cov: 12049 ft: 13135 corp: 7/600b lim: 320 exec/s: 0 rss: 70Mb L: 102/111 MS: 1 CMP- DE: 
"\377\377\377\377\377\377\377\016"- 00:07:37.644 [2024-06-07 22:58:29.766525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:fcffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.644 [2024-06-07 22:58:29.766557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.644 #63 NEW cov: 12049 ft: 13205 corp: 8/694b lim: 320 exec/s: 0 rss: 70Mb L: 94/111 MS: 1 ChangeBinInt- 00:07:37.644 [2024-06-07 22:58:29.806538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.644 [2024-06-07 22:58:29.806570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.644 #64 NEW cov: 12049 ft: 13225 corp: 9/790b lim: 320 exec/s: 0 rss: 70Mb L: 96/111 MS: 1 PersAutoDict- DE: "\377\203"- 00:07:37.644 [2024-06-07 22:58:29.846636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.644 [2024-06-07 22:58:29.846668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.644 #65 NEW cov: 12049 ft: 13253 corp: 10/892b lim: 320 exec/s: 0 rss: 70Mb L: 102/111 MS: 1 CopyPart- 00:07:37.644 [2024-06-07 22:58:29.906866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.644 [2024-06-07 22:58:29.906899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.903 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:37.903 #66 NEW cov: 12072 ft: 13324 corp: 11/994b lim: 320 exec/s: 0 rss: 71Mb L: 102/111 MS: 1 CopyPart- 00:07:37.903 [2024-06-07 22:58:29.957002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:fcffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.903 [2024-06-07 22:58:29.957034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.903 #67 NEW cov: 12072 ft: 13380 corp: 12/1088b lim: 320 exec/s: 0 rss: 71Mb L: 94/111 MS: 1 ChangeBit- 00:07:37.903 [2024-06-07 22:58:30.017279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.903 [2024-06-07 22:58:30.017313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.903 #68 NEW cov: 12072 ft: 13408 corp: 13/1182b lim: 320 exec/s: 68 rss: 71Mb L: 94/111 MS: 1 ChangeByte- 00:07:37.903 [2024-06-07 22:58:30.057312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff 00:07:37.903 [2024-06-07 22:58:30.057344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:37.903 #69 NEW cov: 12074 ft: 13441 corp: 14/1276b lim: 320 exec/s: 69 rss: 71Mb L: 94/111 MS: 1 ChangeBinInt- 00:07:37.903 [2024-06-07 22:58:30.097559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.903 [2024-06-07 22:58:30.097598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.903 [2024-06-07 22:58:30.097667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.903 [2024-06-07 22:58:30.097685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.903 #70 NEW cov: 12074 ft: 13704 corp: 15/1464b lim: 320 exec/s: 70 rss: 71Mb L: 188/188 MS: 1 CrossOver- 00:07:37.903 [2024-06-07 22:58:30.147520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff83ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.903 [2024-06-07 22:58:30.147551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.162 #71 NEW cov: 12074 ft: 13755 corp: 16/1572b lim: 320 exec/s: 71 rss: 71Mb L: 108/188 MS: 1 InsertRepeatedBytes- 00:07:38.162 [2024-06-07 22:58:30.207717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff83ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.162 [2024-06-07 22:58:30.207750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.162 #72 NEW cov: 12074 ft: 13791 corp: 17/1680b lim: 320 exec/s: 72 rss: 71Mb L: 108/188 MS: 1 ShuffleBytes- 00:07:38.162 [2024-06-07 22:58:30.268186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.162 [2024-06-07 22:58:30.268217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.162 [2024-06-07 22:58:30.268280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.162 [2024-06-07 22:58:30.268298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.162 [2024-06-07 22:58:30.268360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:07:38.162 [2024-06-07 22:58:30.268378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.162 #73 NEW cov: 12074 ft: 14020 corp: 18/1872b lim: 320 exec/s: 73 rss: 71Mb L: 192/192 MS: 1 InsertRepeatedBytes- 00:07:38.162 [2024-06-07 22:58:30.317967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff83ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.162 [2024-06-07 22:58:30.317998] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.162 #74 NEW cov: 12074 ft: 14024 corp: 19/1980b lim: 320 exec/s: 74 rss: 71Mb L: 108/192 MS: 1 ShuffleBytes- 00:07:38.162 [2024-06-07 22:58:30.378499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:38.162 [2024-06-07 22:58:30.378530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.162 [2024-06-07 22:58:30.378594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:4040404 cdw10:04040404 cdw11:04040404 00:07:38.162 [2024-06-07 22:58:30.378617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.162 [2024-06-07 22:58:30.378678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:4040404 cdw10:04040404 cdw11:04040404 00:07:38.162 [2024-06-07 22:58:30.378695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.162 #75 NEW cov: 12075 ft: 14144 corp: 20/2187b lim: 320 exec/s: 75 rss: 71Mb L: 207/207 MS: 1 CopyPart- 00:07:38.421 [2024-06-07 22:58:30.448705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:38.421 [2024-06-07 22:58:30.448738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.421 [2024-06-07 22:58:30.448805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:4040404 cdw10:04040404 cdw11:04040404 00:07:38.421 [2024-06-07 22:58:30.448823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.421 [2024-06-07 22:58:30.448883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:4040404 cdw10:04040404 cdw11:04040404 00:07:38.421 [2024-06-07 22:58:30.448901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.421 #76 NEW cov: 12075 ft: 14158 corp: 21/2395b lim: 320 exec/s: 76 rss: 71Mb L: 208/208 MS: 1 InsertByte- 00:07:38.421 [2024-06-07 22:58:30.518570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff83ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.421 [2024-06-07 22:58:30.518605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.421 #77 NEW cov: 12075 ft: 14183 corp: 22/2503b lim: 320 exec/s: 77 rss: 71Mb L: 108/208 MS: 1 ChangeByte- 00:07:38.421 [2024-06-07 22:58:30.578740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.421 [2024-06-07 22:58:30.578772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.421 #78 NEW cov: 12075 ft: 14209 corp: 
23/2597b lim: 320 exec/s: 78 rss: 72Mb L: 94/208 MS: 1 ShuffleBytes- 00:07:38.421 [2024-06-07 22:58:30.639056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:38.421 [2024-06-07 22:58:30.639086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.421 [2024-06-07 22:58:30.639149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:04040404 cdw11:04040404 00:07:38.421 [2024-06-07 22:58:30.639168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.421 #79 NEW cov: 12075 ft: 14221 corp: 24/2748b lim: 320 exec/s: 79 rss: 72Mb L: 151/208 MS: 1 InsertRepeatedBytes- 00:07:38.421 [2024-06-07 22:58:30.689062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:38.421 [2024-06-07 22:58:30.689093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.680 #80 NEW cov: 12075 ft: 14249 corp: 25/2852b lim: 320 exec/s: 80 rss: 72Mb L: 104/208 MS: 1 ChangeBit- 00:07:38.680 [2024-06-07 22:58:30.729208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:4 nsid:4040404 cdw10:04040404 cdw11:04040404 SGL TRANSPORT DATA BLOCK TRANSPORT 0x404040404040404 00:07:38.680 [2024-06-07 22:58:30.729241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.680 #81 NEW cov: 12075 ft: 14254 corp: 26/2957b lim: 320 exec/s: 81 rss: 72Mb L: 105/208 MS: 1 InsertByte- 00:07:38.680 [2024-06-07 22:58:30.799325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ff0effff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.680 [2024-06-07 22:58:30.799355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.680 #82 NEW cov: 12075 ft: 14272 corp: 27/3053b lim: 320 exec/s: 82 rss: 73Mb L: 96/208 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\016"- 00:07:38.680 [2024-06-07 22:58:30.859477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.680 [2024-06-07 22:58:30.859508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.680 #83 NEW cov: 12075 ft: 14342 corp: 28/3147b lim: 320 exec/s: 83 rss: 73Mb L: 94/208 MS: 1 ChangeBit- 00:07:38.680 [2024-06-07 22:58:30.899681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.680 [2024-06-07 22:58:30.899712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.680 #84 NEW cov: 12075 ft: 14346 corp: 29/3241b lim: 320 exec/s: 84 rss: 73Mb L: 94/208 MS: 1 ChangeBinInt- 00:07:38.940 [2024-06-07 22:58:30.959779] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.940 [2024-06-07 22:58:30.959811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.940 #85 NEW cov: 12075 ft: 14382 corp: 30/3335b lim: 320 exec/s: 85 rss: 73Mb L: 94/208 MS: 1 CrossOver- 00:07:38.940 [2024-06-07 22:58:31.020001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.940 [2024-06-07 22:58:31.020033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.940 #86 NEW cov: 12075 ft: 14453 corp: 31/3430b lim: 320 exec/s: 43 rss: 73Mb L: 95/208 MS: 1 InsertByte- 00:07:38.940 #86 DONE cov: 12075 ft: 14453 corp: 31/3430b lim: 320 exec/s: 43 rss: 73Mb 00:07:38.940 ###### Recommended dictionary. ###### 00:07:38.940 "\377\203" # Uses: 1 00:07:38.940 "\377\377\377\377\377\377\377\016" # Uses: 1 00:07:38.940 ###### End of recommended dictionary. ###### 00:07:38.940 Done 86 runs in 2 second(s) 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:38.940 22:58:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:39.200 [2024-06-07 22:58:31.233184] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:39.200 [2024-06-07 22:58:31.233256] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149384 ] 00:07:39.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.459 [2024-06-07 22:58:31.542332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.459 [2024-06-07 22:58:31.641718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.459 [2024-06-07 22:58:31.704124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.459 [2024-06-07 22:58:31.720485] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:39.459 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.459 INFO: Seed: 580379898 00:07:39.718 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:39.718 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:39.718 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:39.718 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.718 #2 INITED exec/s: 0 rss: 64Mb 00:07:39.718 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:39.718 This may also happen if the target rejected all inputs we tried so far 00:07:39.718 [2024-06-07 22:58:31.775929] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:39.718 [2024-06-07 22:58:31.776073] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:39.718 [2024-06-07 22:58:31.776210] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:39.718 [2024-06-07 22:58:31.776341] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:39.718 [2024-06-07 22:58:31.776607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.718 [2024-06-07 22:58:31.776645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.718 [2024-06-07 22:58:31.776718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.718 [2024-06-07 22:58:31.776737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.718 [2024-06-07 22:58:31.776802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.718 [2024-06-07 22:58:31.776820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.718 [2024-06-07 22:58:31.776886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.718 [2024-06-07 22:58:31.776908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.977 NEW_FUNC[1/685]: 0x483780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:39.977 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.977 #3 NEW cov: 11891 ft: 11893 corp: 2/26b lim: 30 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:39.977 [2024-06-07 22:58:32.226987] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:39.977 [2024-06-07 22:58:32.227140] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:39.977 [2024-06-07 22:58:32.227404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.977 [2024-06-07 22:58:32.227443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.977 [2024-06-07 22:58:32.227513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4d4d814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.977 [2024-06-07 22:58:32.227532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:40.237 NEW_FUNC[1/1]: 0xf921e0 in rte_get_tsc_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:61 00:07:40.237 #8 NEW cov: 12028 ft: 12943 corp: 3/38b lim: 30 exec/s: 0 rss: 72Mb L: 12/25 MS: 5 CopyPart-ChangeByte-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:40.237 [2024-06-07 22:58:32.286922] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:40.237 [2024-06-07 22:58:32.287193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:4d4d814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.287226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 #9 NEW cov: 12034 ft: 13626 corp: 4/48b lim: 30 exec/s: 0 rss: 72Mb L: 10/25 MS: 1 EraseBytes- 00:07:40.237 [2024-06-07 22:58:32.357281] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.357420] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.357549] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (42628) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.357684] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.357944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.357978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.358049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.358067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.358134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:29a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.358152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.358218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.358240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.237 #10 NEW cov: 12119 ft: 13862 corp: 5/74b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertByte- 00:07:40.237 [2024-06-07 22:58:32.427470] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.427615] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.427750] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa03f 00:07:40.237 [2024-06-07 22:58:32.427881] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size 
(4096) 00:07:40.237 [2024-06-07 22:58:32.428146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.428179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.428249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.428268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.428333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.428351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.428415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.428433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.237 #11 NEW cov: 12119 ft: 14034 corp: 6/100b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertByte- 00:07:40.237 [2024-06-07 22:58:32.477643] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.477787] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.477919] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:40.237 [2024-06-07 22:58:32.478047] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa00a 00:07:40.237 [2024-06-07 22:58:32.478311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.478345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.478416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.478435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.478500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.478518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.237 [2024-06-07 22:58:32.478586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.237 [2024-06-07 22:58:32.478605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.497 #12 NEW cov: 12119 ft: 14158 corp: 7/124b lim: 30 exec/s: 0 rss: 72Mb L: 24/26 MS: 1 EraseBytes- 00:07:40.497 [2024-06-07 22:58:32.547892] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.548034] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.548161] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.548287] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.548560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.548600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.548670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.548689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.548756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.548774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.548837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.548856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.497 #13 NEW cov: 12119 ft: 14206 corp: 8/150b lim: 30 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:40.497 [2024-06-07 22:58:32.597860] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:40.497 [2024-06-07 22:58:32.597994] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d56 00:07:40.497 [2024-06-07 22:58:32.598249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.598282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.598349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4d4d814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.598368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.497 #14 NEW cov: 12119 ft: 14313 corp: 9/162b lim: 30 exec/s: 0 rss: 72Mb L: 12/26 MS: 1 ChangeByte- 00:07:40.497 [2024-06-07 22:58:32.648072] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:40.497 [2024-06-07 
22:58:32.648316] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x4d4d 00:07:40.497 [2024-06-07 22:58:32.648595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.648628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.648697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.648715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.497 [2024-06-07 22:58:32.648781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.497 [2024-06-07 22:58:32.648802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.497 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:40.497 #15 NEW cov: 12159 ft: 14600 corp: 10/184b lim: 30 exec/s: 0 rss: 72Mb L: 22/26 MS: 1 InsertRepeatedBytes- 00:07:40.497 [2024-06-07 22:58:32.698223] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.698358] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (163940) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.698492] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:40.497 [2024-06-07 22:58:32.698627] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa00a 00:07:40.498 [2024-06-07 22:58:32.698883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.698916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.698983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0180000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.699002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.699067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.699085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.699150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.699168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.498 #16 NEW cov: 12159 ft: 14646 corp: 11/208b lim: 30 exec/s: 0 rss: 72Mb L: 24/26 
MS: 1 ChangeBinInt- 00:07:40.498 [2024-06-07 22:58:32.768414] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.498 [2024-06-07 22:58:32.768554] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.498 [2024-06-07 22:58:32.768693] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa03f 00:07:40.498 [2024-06-07 22:58:32.768827] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.498 [2024-06-07 22:58:32.769099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.769132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.769204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.769223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.769286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a6a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.769302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.498 [2024-06-07 22:58:32.769370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.498 [2024-06-07 22:58:32.769391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.757 #17 NEW cov: 12159 ft: 14672 corp: 12/234b lim: 30 exec/s: 17 rss: 72Mb L: 26/26 MS: 1 ChangeBinInt- 00:07:40.757 [2024-06-07 22:58:32.818572] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (24580) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.818722] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.818858] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.818992] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa00a 00:07:40.757 [2024-06-07 22:58:32.819266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:18000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.819299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.819371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.819390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.819457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 
cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.819475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.819539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.819556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.757 #18 NEW cov: 12159 ft: 14734 corp: 13/258b lim: 30 exec/s: 18 rss: 73Mb L: 24/26 MS: 1 ChangeBinInt- 00:07:40.757 [2024-06-07 22:58:32.868693] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.868834] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.868966] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.869094] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.757 [2024-06-07 22:58:32.869357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.869390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.869460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.869479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.869543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.757 [2024-06-07 22:58:32.869560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.757 [2024-06-07 22:58:32.869625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.869643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.758 #19 NEW cov: 12159 ft: 14744 corp: 14/284b lim: 30 exec/s: 19 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:40.758 [2024-06-07 22:58:32.938857] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:40.758 [2024-06-07 22:58:32.938995] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004d4d 00:07:40.758 [2024-06-07 22:58:32.939269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.939302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.758 [2024-06-07 22:58:32.939371] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:4d4d834d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.939390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.758 #20 NEW cov: 12159 ft: 14762 corp: 15/296b lim: 30 exec/s: 20 rss: 73Mb L: 12/26 MS: 1 ChangeByte- 00:07:40.758 [2024-06-07 22:58:32.989058] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.758 [2024-06-07 22:58:32.989195] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.758 [2024-06-07 22:58:32.989321] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.758 [2024-06-07 22:58:32.989444] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:40.758 [2024-06-07 22:58:32.989711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.989752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.758 [2024-06-07 22:58:32.989823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.989841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.758 [2024-06-07 22:58:32.989907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.989924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.758 [2024-06-07 22:58:32.989992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.758 [2024-06-07 22:58:32.990010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.018 #21 NEW cov: 12159 ft: 14814 corp: 16/322b lim: 30 exec/s: 21 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:41.018 [2024-06-07 22:58:33.059254] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.059389] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (163940) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.059517] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.059648] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa02f 00:07:41.018 [2024-06-07 22:58:33.059899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.059932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.059999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0180000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.060017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.060084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.060103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.060166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.060182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.018 #22 NEW cov: 12159 ft: 14845 corp: 17/346b lim: 30 exec/s: 22 rss: 73Mb L: 24/26 MS: 1 ChangeByte- 00:07:41.018 [2024-06-07 22:58:33.129488] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.129601] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.129665] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa03f 00:07:41.018 [2024-06-07 22:58:33.129791] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:41.018 [2024-06-07 22:58:33.129913] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa00a 00:07:41.018 [2024-06-07 22:58:33.130178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.130211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.130280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.130299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.130363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a6a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.130381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.130443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.130461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.130523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff00a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.130541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.018 #23 NEW cov: 12159 ft: 14935 corp: 18/376b lim: 30 exec/s: 23 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:41.018 [2024-06-07 22:58:33.199541] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.199689] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.199947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.199980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.200051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.200073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.018 #24 NEW cov: 12159 ft: 14953 corp: 19/392b lim: 30 exec/s: 24 rss: 73Mb L: 16/30 MS: 1 EraseBytes- 00:07:41.018 [2024-06-07 22:58:33.269846] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.269984] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.270114] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa03f 00:07:41.018 [2024-06-07 22:58:33.270241] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.018 [2024-06-07 22:58:33.270522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.270554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.270628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.270648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.270712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.270730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.018 [2024-06-07 22:58:33.270794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.018 [2024-06-07 22:58:33.270811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.331 #25 NEW cov: 12159 ft: 14960 corp: 20/419b lim: 30 exec/s: 25 rss: 73Mb L: 27/30 MS: 1 InsertByte- 00:07:41.331 [2024-06-07 22:58:33.340044] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size 
(4096) 00:07:41.331 [2024-06-07 22:58:33.340183] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.340310] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.340435] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.340687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.340718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.340789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.340807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.340873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.340890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.340956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000b1 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.340974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.331 #26 NEW cov: 12159 ft: 15025 corp: 21/446b lim: 30 exec/s: 26 rss: 73Mb L: 27/30 MS: 1 InsertByte- 00:07:41.331 [2024-06-07 22:58:33.390116] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.390254] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (163940) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.390384] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (950916) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.390516] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa02f 00:07:41.331 [2024-06-07 22:58:33.390790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.390823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.390890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0180000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.390909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.390973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a083a0 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.390991] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.391055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.391072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.331 #27 NEW cov: 12159 ft: 15088 corp: 22/470b lim: 30 exec/s: 27 rss: 74Mb L: 24/30 MS: 1 ShuffleBytes- 00:07:41.331 [2024-06-07 22:58:33.460282] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (418404) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.460420] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:41.331 [2024-06-07 22:58:33.460682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.460715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.331 [2024-06-07 22:58:33.460785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.331 [2024-06-07 22:58:33.460803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.331 #28 NEW cov: 12159 ft: 15103 corp: 23/485b lim: 30 exec/s: 28 rss: 74Mb L: 15/30 MS: 1 EraseBytes- 00:07:41.331 [2024-06-07 22:58:33.530463] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (418404) > buf size (4096) 00:07:41.331 [2024-06-07 22:58:33.530611] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:41.332 [2024-06-07 22:58:33.530882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814c cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.332 [2024-06-07 22:58:33.530915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.332 [2024-06-07 22:58:33.530986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.332 [2024-06-07 22:58:33.531004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.332 #29 NEW cov: 12159 ft: 15112 corp: 24/500b lim: 30 exec/s: 29 rss: 74Mb L: 15/30 MS: 1 ChangeBit- 00:07:41.610 [2024-06-07 22:58:33.600739] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.600881] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.601015] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.601140] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (65156) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.601408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.601441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.601514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.601532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.601604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.601623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.601688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3fa000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.601705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.610 #30 NEW cov: 12159 ft: 15129 corp: 25/529b lim: 30 exec/s: 30 rss: 74Mb L: 29/30 MS: 1 CrossOver- 00:07:41.610 [2024-06-07 22:58:33.670807] ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100004d4d 00:07:41.610 [2024-06-07 22:58:33.671066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9898814d cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.671099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.610 #31 NEW cov: 12159 ft: 15154 corp: 26/538b lim: 30 exec/s: 31 rss: 74Mb L: 9/30 MS: 1 EraseBytes- 00:07:41.610 [2024-06-07 22:58:33.741152] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.741291] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.741420] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164484) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.741547] ctrlr.c:2626:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (164096) > buf size (4096) 00:07:41.610 [2024-06-07 22:58:33.741816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.741849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.741920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.741938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.742005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a0a000a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.742024] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.610 [2024-06-07 22:58:33.742090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a03f00a0 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.610 [2024-06-07 22:58:33.742112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.610 #32 pulse cov: 12159 ft: 15216 corp: 26/538b lim: 30 exec/s: 16 rss: 74Mb 00:07:41.610 #32 NEW cov: 12159 ft: 15216 corp: 27/567b lim: 30 exec/s: 16 rss: 74Mb L: 29/30 MS: 1 CopyPart- 00:07:41.610 #32 DONE cov: 12159 ft: 15216 corp: 27/567b lim: 30 exec/s: 16 rss: 74Mb 00:07:41.610 Done 32 runs in 2 second(s) 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.870 22:58:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:41.870 [2024-06-07 22:58:33.954087] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:07:41.870 [2024-06-07 22:58:33.954159] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149801 ] 00:07:41.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.129 [2024-06-07 22:58:34.261160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.129 [2024-06-07 22:58:34.360172] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.388 [2024-06-07 22:58:34.422541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.388 [2024-06-07 22:58:34.438918] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:42.388 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.388 INFO: Seed: 3299387737 00:07:42.388 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:42.388 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:42.389 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:42.389 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.389 #2 INITED exec/s: 0 rss: 63Mb 00:07:42.389 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.389 This may also happen if the target rejected all inputs we tried so far 00:07:42.389 [2024-06-07 22:58:34.515766] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.389 [2024-06-07 22:58:34.516263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.389 [2024-06-07 22:58:34.516312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.389 [2024-06-07 22:58:34.516395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.389 [2024-06-07 22:58:34.516419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.957 NEW_FUNC[1/685]: 0x486230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:42.957 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.957 #6 NEW cov: 11840 ft: 11808 corp: 2/20b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 4 InsertByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:42.957 [2024-06-07 22:58:34.976486] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.957 [2024-06-07 22:58:34.976932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:34.976981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:34.977125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:42.957 [2024-06-07 22:58:34.977152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.957 #12 NEW cov: 11970 ft: 12471 corp: 3/38b lim: 35 exec/s: 0 rss: 71Mb L: 18/19 MS: 1 EraseBytes- 00:07:42.957 [2024-06-07 22:58:35.056529] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.957 [2024-06-07 22:58:35.056961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.057001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:35.057136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.057166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.957 #13 NEW cov: 11976 ft: 12896 corp: 4/56b lim: 35 exec/s: 0 rss: 71Mb L: 18/19 MS: 1 ChangeBinInt- 00:07:42.957 [2024-06-07 22:58:35.136882] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.957 [2024-06-07 22:58:35.137612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.137653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:35.137787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00d30000 cdw11:d300d3d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.137816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:35.137957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d3d300d3 cdw11:d300d3d3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.137985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:35.138118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d3d300d3 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.138141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.957 #14 NEW cov: 12061 ft: 13713 corp: 5/90b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:42.957 [2024-06-07 22:58:35.206948] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.957 [2024-06-07 22:58:35.207354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.207394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.957 [2024-06-07 22:58:35.207527] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.957 [2024-06-07 22:58:35.207555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.217 #15 NEW cov: 12061 ft: 13936 corp: 6/108b lim: 35 exec/s: 0 rss: 71Mb L: 18/34 MS: 1 ChangeBinInt- 00:07:43.217 [2024-06-07 22:58:35.267451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.267488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.267613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000a9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.267637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.217 #16 NEW cov: 12061 ft: 13985 corp: 7/127b lim: 35 exec/s: 0 rss: 71Mb L: 19/34 MS: 1 InsertByte- 00:07:43.217 [2024-06-07 22:58:35.328001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.328038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.328174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.328199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.328332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.328356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.217 #18 NEW cov: 12061 ft: 14263 corp: 8/151b lim: 35 exec/s: 0 rss: 71Mb L: 24/34 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:43.217 [2024-06-07 22:58:35.387652] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.217 [2024-06-07 22:58:35.388246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.388285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.388417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0e000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.388445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.388585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:7e4c0045 cdw11:0000131a 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.388608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.217 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:43.217 #19 NEW cov: 12084 ft: 14416 corp: 9/177b lim: 35 exec/s: 0 rss: 71Mb L: 26/34 MS: 1 CMP- DE: "\000\016>E~L\023\032"- 00:07:43.217 [2024-06-07 22:58:35.467788] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.217 [2024-06-07 22:58:35.468191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.468230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.217 [2024-06-07 22:58:35.468357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.217 [2024-06-07 22:58:35.468389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.476 #20 NEW cov: 12084 ft: 14484 corp: 10/195b lim: 35 exec/s: 20 rss: 71Mb L: 18/34 MS: 1 ChangeBit- 00:07:43.476 [2024-06-07 22:58:35.548088] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.476 [2024-06-07 22:58:35.548811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.548848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.548988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0e000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.549017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.549156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:7e4c0045 cdw11:ea0013ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.549180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.549318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eaea00ea cdw11:0000ea1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.549342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.476 #21 NEW cov: 12084 ft: 14531 corp: 11/228b lim: 35 exec/s: 21 rss: 72Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:07:43.476 [2024-06-07 22:58:35.628238] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.476 [2024-06-07 22:58:35.628663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00001200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.628701] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.628832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.628858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.476 #22 NEW cov: 12084 ft: 14543 corp: 12/246b lim: 35 exec/s: 22 rss: 72Mb L: 18/34 MS: 1 ChangeBinInt- 00:07:43.476 [2024-06-07 22:58:35.688743] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.476 [2024-06-07 22:58:35.688930] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.476 [2024-06-07 22:58:35.689339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.689378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.689513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000a9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.689535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.689666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000000a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.689692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.476 [2024-06-07 22:58:35.689831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.476 [2024-06-07 22:58:35.689858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.476 #23 NEW cov: 12084 ft: 14576 corp: 13/276b lim: 35 exec/s: 23 rss: 72Mb L: 30/34 MS: 1 CopyPart- 00:07:43.736 [2024-06-07 22:58:35.768688] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.736 [2024-06-07 22:58:35.769262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.736 [2024-06-07 22:58:35.769301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.736 [2024-06-07 22:58:35.769443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.736 [2024-06-07 22:58:35.769471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.736 [2024-06-07 22:58:35.769610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3e45000e cdw11:13007e4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:43.736 [2024-06-07 22:58:35.769633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.736 #24 NEW cov: 12084 ft: 14596 corp: 14/302b lim: 35 exec/s: 24 rss: 72Mb L: 26/34 MS: 1 PersAutoDict- DE: "\000\016>E~L\023\032"-
00:07:43.736 [2024-06-07 22:58:35.849307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.849343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.736 [2024-06-07 22:58:35.849476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000012 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.849499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.736 #25 NEW cov: 12084 ft: 14618 corp: 15/320b lim: 35 exec/s: 25 rss: 72Mb L: 18/34 MS: 1 ChangeBinInt-
00:07:43.736 [2024-06-07 22:58:35.909044] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:43.736 [2024-06-07 22:58:35.909443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.909486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.736 [2024-06-07 22:58:35.909616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00fb0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.909644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.736 #26 NEW cov: 12084 ft: 14643 corp: 16/338b lim: 35 exec/s: 26 rss: 72Mb L: 18/34 MS: 1 ChangeBinInt-
00:07:43.736 [2024-06-07 22:58:35.969338] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:43.736 [2024-06-07 22:58:35.969925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.969963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.736 [2024-06-07 22:58:35.970100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0e000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.970132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.736 [2024-06-07 22:58:35.970266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:7e4c0045 cdw11:0000131a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.736 [2024-06-07 22:58:35.970290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.736 #27 NEW cov: 12084 ft: 14667 corp: 17/363b lim: 35 exec/s: 27 rss: 72Mb L: 25/34 MS: 1 EraseBytes-
00:07:43.996 [2024-06-07 22:58:36.030131] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:43.996 [2024-06-07 22:58:36.030538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.030573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.030718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000012 cdw11:40000040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.030739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.030877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.030899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.031034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:40400040 cdw11:40004040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.031055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.031194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:f8000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.031223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:43.996 #28 NEW cov: 12084 ft: 14728 corp: 18/398b lim: 35 exec/s: 28 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes-
00:07:43.996 [2024-06-07 22:58:36.110113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.110156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.110295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.110319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.996 #32 NEW cov: 12084 ft: 14738 corp: 19/417b lim: 35 exec/s: 32 rss: 72Mb L: 19/35 MS: 4 ShuffleBytes-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-
00:07:43.996 [2024-06-07 22:58:36.170462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.170497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.170630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.170652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.170780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4f4f004f cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.170802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:43.996 #33 NEW cov: 12084 ft: 14762 corp: 20/441b lim: 35 exec/s: 33 rss: 72Mb L: 24/35 MS: 1 ChangeBit-
00:07:43.996 [2024-06-07 22:58:36.250487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.250525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:43.996 [2024-06-07 22:58:36.250663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000a9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:43.996 [2024-06-07 22:58:36.250685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.255 #34 NEW cov: 12084 ft: 14783 corp: 21/460b lim: 35 exec/s: 34 rss: 72Mb L: 19/35 MS: 1 ChangeBinInt-
00:07:44.255 [2024-06-07 22:58:36.310431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a4f0007 cdw11:4f004f4f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.255 [2024-06-07 22:58:36.310468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.255 #37 NEW cov: 12084 ft: 15085 corp: 22/469b lim: 35 exec/s: 37 rss: 72Mb L: 9/35 MS: 3 ShuffleBytes-InsertByte-CrossOver-
00:07:44.255 [2024-06-07 22:58:36.370664] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:44.255 [2024-06-07 22:58:36.371018] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:44.255 [2024-06-07 22:58:36.371427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.255 [2024-06-07 22:58:36.371464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.255 [2024-06-07 22:58:36.371606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0e3e0000 cdw11:4c00457e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.255 [2024-06-07 22:58:36.371637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.255 [2024-06-07 22:58:36.371768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0000001a cdw11:00000600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.255 [2024-06-07 22:58:36.371795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.255 [2024-06-07 22:58:36.371928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0e3e0000 cdw11:4c00457e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.371959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:44.256 #38 NEW cov: 12084 ft: 15102 corp: 23/503b lim: 35 exec/s: 38 rss: 72Mb L: 34/35 MS: 1 PersAutoDict- DE: "\000\016>E~L\023\032"-
00:07:44.256 [2024-06-07 22:58:36.450814] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:44.256 [2024-06-07 22:58:36.451007] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:44.256 [2024-06-07 22:58:36.451418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.451455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.256 [2024-06-07 22:58:36.451587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.451618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.256 [2024-06-07 22:58:36.451749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.451771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.256 #41 NEW cov: 12084 ft: 15119 corp: 24/528b lim: 35 exec/s: 41 rss: 73Mb L: 25/35 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes-
00:07:44.256 [2024-06-07 22:58:36.511084] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:44.256 [2024-06-07 22:58:36.511840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.511876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:44.256 [2024-06-07 22:58:36.512010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0e000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.512038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:44.256 [2024-06-07 22:58:36.512170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:7e4c0045 cdw11:ea0013ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.512195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:44.256 [2024-06-07 22:58:36.512327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:eaea00ea cdw11:0000ea1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:44.256 [2024-06-07 22:58:36.512351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:44.515 #42 NEW cov: 12084 ft: 15131 corp: 25/561b lim: 35 exec/s: 21 rss: 73Mb L: 33/35 MS: 1 ChangeByte-
00:07:44.515 #42 DONE cov: 12084 ft: 15131 corp: 25/561b lim: 35 exec/s: 21 rss: 73Mb
00:07:44.515 ###### Recommended dictionary. ######
00:07:44.515 "\000\016>E~L\023\032" # Uses: 2
00:07:44.515 ###### End of recommended dictionary. ######
00:07:44.515 Done 42 runs in 2 second(s)
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf
00:07:44.515 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403'
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:44.516 22:58:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3
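The nvmf/run.sh trace just above shows how each fuzzer instance is isolated from the previous one: the instance index is zero-padded into a TCP port in the 44xx range, the stock fuzz_json.conf is rewritten for that port, and two LSAN leak suppressions are emitted before llvm_nvme_fuzz is exec'd. A minimal standalone sketch of that scheme, reconstructed from the xtrace lines (the xtrace hides redirections, so writing the echo output into the suppression file is an assumption here, inferred from the LSAN_OPTIONS set at run.sh@32):

    i=3
    port=44$(printf %02d $i)    # instance 3 -> port 4403, as at run.sh@34
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > /tmp/fuzz_json_$i.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"    # assumed target of run.sh@41
    echo leak:nvmf_ctrlr_create >> "$suppress_file"            # assumed target of run.sh@42
    export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0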
00:07:44.516 [2024-06-07 22:58:36.748105] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:44.516 [2024-06-07 22:58:36.748169] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150336 ]
00:07:44.775 EAL: No free 2048 kB hugepages reported on node 1
00:07:45.034 [2024-06-07 22:58:37.058434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.034 [2024-06-07 22:58:37.164403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.034 [2024-06-07 22:58:37.226816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:45.034 [2024-06-07 22:58:37.243192] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 ***
00:07:45.034 INFO: Running with entropic power schedule (0xFF, 100).
00:07:45.034 INFO: Seed: 1807409126
00:07:45.034 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
00:07:45.034 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
00:07:45.034 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:45.034 INFO: A corpus is not provided, starting from an empty corpus
00:07:45.034 #2 INITED exec/s: 0 rss: 63Mb
00:07:45.034 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:45.034 This may also happen if the target rejected all inputs we tried so far
00:07:45.553 NEW_FUNC[1/674]: 0x487f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114
00:07:45.553 NEW_FUNC[2/674]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:45.553 #9 NEW cov: 11754 ft: 11755 corp: 2/20b lim: 20 exec/s: 0 rss: 70Mb L: 19/19 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:07:45.812 #10 NEW cov: 11884 ft: 12455 corp: 3/40b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CopyPart-
00:07:45.812 #11 NEW cov: 11890 ft: 12803 corp: 4/60b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver-
00:07:45.812 #12 NEW cov: 11975 ft: 13126 corp: 5/80b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CMP- DE: "\000\000\000\037"-
00:07:45.812 #13 NEW cov: 11975 ft: 13169 corp: 6/99b lim: 20 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeBinInt-
00:07:46.071 #14 NEW cov: 11975 ft: 13264 corp: 7/118b lim: 20 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeByte-
00:07:46.071 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601
00:07:46.071 #15 NEW cov: 11998 ft: 13421 corp: 8/137b lim: 20 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeBinInt-
00:07:46.071 #16 NEW cov: 11998 ft: 13448 corp: 9/156b lim: 20 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 ChangeBit-
00:07:46.071 #17 NEW cov: 11998 ft: 13529 corp: 10/176b lim: 20 exec/s: 17 rss: 70Mb L: 20/20 MS: 1 CopyPart-
00:07:46.330 #18 NEW cov: 11998 ft: 13568 corp: 11/196b lim: 20 exec/s: 18 rss: 70Mb L: 20/20 MS: 1 PersAutoDict- DE: "\000\000\000\037"-
00:07:46.330 #19 NEW cov: 11998 ft: 13656 corp: 12/216b lim: 20 exec/s: 19 rss: 71Mb L: 20/20 MS: 1 ChangeByte-
00:07:46.330 #20 NEW cov: 11998 ft: 13688 corp: 13/236b lim: 20 exec/s: 20 rss: 71Mb L: 20/20 MS: 1 CopyPart-
00:07:46.589 #21 NEW cov: 11998 ft: 13727 corp: 14/256b lim: 20 exec/s: 21 rss: 71Mb L: 20/20 MS: 1 ChangeBit-
00:07:46.589 #22 NEW cov: 11998 ft: 13823 corp: 15/275b lim: 20 exec/s: 22 rss: 71Mb L: 19/20 MS: 1 ChangeByte-
00:07:46.589 #23 NEW cov: 11998 ft: 13835 corp: 16/295b lim: 20 exec/s: 23 rss: 71Mb L: 20/20 MS: 1 ChangeByte-
00:07:46.589 #24 NEW cov: 11998 ft: 13915 corp: 17/314b lim: 20 exec/s: 24 rss: 71Mb L: 19/20 MS: 1 ChangeBit-
00:07:46.848 #25 NEW cov: 11998 ft: 13937 corp: 18/333b lim: 20 exec/s: 25 rss: 71Mb L: 19/20 MS: 1 ChangeByte-
00:07:46.848 #26 NEW cov: 11998 ft: 13944 corp: 19/353b lim: 20 exec/s: 26 rss: 71Mb L: 20/20 MS: 1 CrossOver-
00:07:46.848 #27 NEW cov: 11998 ft: 13950 corp: 20/373b lim: 20 exec/s: 27 rss: 71Mb L: 20/20 MS: 1 ChangeByte-
00:07:46.848 #28 NEW cov: 11998 ft: 13967 corp: 21/393b lim: 20 exec/s: 28 rss: 72Mb L: 20/20 MS: 1 ChangeBit-
00:07:47.108 #30 NEW cov: 11998 ft: 14359 corp: 22/398b lim: 20 exec/s: 30 rss: 72Mb L: 5/20 MS: 2 ChangeByte-PersAutoDict- DE: "\000\000\000\037"-
00:07:47.108 #33 NEW cov: 11998 ft: 14429 corp: 23/405b lim: 20 exec/s: 33 rss: 72Mb L: 7/20 MS: 3 ChangeByte-ChangeBinInt-CrossOver-
00:07:47.108 #34 NEW cov: 11998 ft: 14454 corp: 24/425b lim: 20 exec/s: 17 rss: 72Mb L: 20/20 MS: 1 CopyPart-
00:07:47.108 #34 DONE cov: 11998 ft: 14454 corp: 24/425b lim: 20 exec/s: 17 rss: 72Mb
00:07:47.108 ###### Recommended dictionary. ######
00:07:47.108 "\000\000\000\037" # Uses: 2
00:07:47.108 ###### End of recommended dictionary. ######
00:07:47.108 Done 34 runs in 2 second(s)
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404'
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:47.368 22:58:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4
00:07:47.627 [2024-06-07 22:58:39.478206] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:47.627 [2024-06-07 22:58:39.478292] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150867 ]
00:07:47.627 EAL: No free 2048 kB hugepages reported on node 1
00:07:47.886 [2024-06-07 22:58:39.791176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.886 [2024-06-07 22:58:39.897178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.886 [2024-06-07 22:58:39.959509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:47.886 [2024-06-07 22:58:39.975878] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 ***
00:07:47.886 INFO: Running with entropic power schedule (0xFF, 100).
00:07:47.886 INFO: Seed: 243461605
00:07:47.886 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
00:07:47.886 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
00:07:47.886 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:47.886 INFO: A corpus is not provided, starting from an empty corpus
00:07:47.886 #2 INITED exec/s: 0 rss: 63Mb
00:07:47.886 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:47.886 This may also happen if the target rejected all inputs we tried so far
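Reading the llvm_nvme_fuzz invocation above against the locals set at run.sh@23-28: -Z carries fuzzer_type, -t carries timen, -m the core mask, -D the corpus directory, -c the rewritten JSON config, and -F points the harness at the freshly opened NVMe/TCP listener. A hand-rolled re-run of this instance would therefore look roughly like the sketch below; the semantics of -s (hugepage memory in MB) and -P (output prefix) are read off the trace and are assumptions here, and the NEW_FUNC symbolization that follows resolves fuzzer type 4 to fuzz_admin_create_io_completion_queue_command:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # fuzzer_type=4 -> -Z 4 (CREATE IO CQ admin-command fuzzer, per the NEW_FUNC line below)
    ./test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz \
        -m 0x1 -s 512 -P ../output/llvm/ \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' \
        -c /tmp/fuzz_json_4.conf -t 1 \
        -D ../corpus/llvm_nvmf_4 -Z 4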
00:07:47.886 [2024-06-07 22:58:40.025227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.886 [2024-06-07 22:58:40.025264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:47.886 [2024-06-07 22:58:40.025335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.886 [2024-06-07 22:58:40.025354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:47.886 [2024-06-07 22:58:40.025422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.886 [2024-06-07 22:58:40.025439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:47.886 [2024-06-07 22:58:40.025506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:47.886 [2024-06-07 22:58:40.025524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.455 NEW_FUNC[1/686]: 0x488ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126
00:07:48.455 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:48.455 #8 NEW cov: 11852 ft: 11853 corp: 2/34b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 InsertRepeatedBytes-
00:07:48.455 [2024-06-07 22:58:40.476272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.455 [2024-06-07 22:58:40.476321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.455 [2024-06-07 22:58:40.476398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.455 [2024-06-07 22:58:40.476418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.455 [2024-06-07 22:58:40.476484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfd00fd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.455 [2024-06-07 22:58:40.476502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.455 [2024-06-07 22:58:40.476570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.455 [2024-06-07 22:58:40.476594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.455 [2024-06-07 22:58:40.476656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.455 [2024-06-07 22:58:40.476673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.455 #9 NEW cov: 11982 ft: 12536 corp: 3/69b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\001\000"-
00:07:48.456 [2024-06-07 22:58:40.546083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.546118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.546187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.546206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.546270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.546287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.546354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:7afd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.546372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.456 #10 NEW cov: 11988 ft: 12771 corp: 4/103b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 InsertByte-
00:07:48.456 [2024-06-07 22:58:40.596410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.596446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.596529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.596616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.596770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.596787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.456 #12 NEW cov: 12073 ft: 12983 corp: 5/138b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 2 ChangeByte-InsertRepeatedBytes-
00:07:48.456 [2024-06-07 22:58:40.645865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.645898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.456 #13 NEW cov: 12073 ft: 14077 corp: 6/147b lim: 35 exec/s: 0 rss: 70Mb L: 9/35 MS: 1 CrossOver-
00:07:48.456 [2024-06-07 22:58:40.716742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.716777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.716843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.716861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.716930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.716947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.717013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fd7a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.717031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.456 [2024-06-07 22:58:40.717095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.456 [2024-06-07 22:58:40.717112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.715 #14 NEW cov: 12073 ft: 14190 corp: 7/182b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertByte-
00:07:48.715 [2024-06-07 22:58:40.766945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.715 [2024-06-07 22:58:40.766978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.715 [2024-06-07 22:58:40.767043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.715 [2024-06-07 22:58:40.767062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.715 [2024-06-07 22:58:40.767131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.715 [2024-06-07 22:58:40.767149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.715 [2024-06-07 22:58:40.767220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fd7a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.715 [2024-06-07 22:58:40.767238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.715 [2024-06-07 22:58:40.767303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.715 [2024-06-07 22:58:40.767320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.716 #20 NEW cov: 12073 ft: 14266 corp: 8/217b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes-
00:07:48.716 [2024-06-07 22:58:40.837123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.837155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.837221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.837240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.837304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfd08fd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.837322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.837384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.837401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.837464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.837482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.716 #21 NEW cov: 12073 ft: 14354 corp: 9/252b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit-
00:07:48.716 [2024-06-07 22:58:40.907342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000021 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.907374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.907439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.907457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.907523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.907540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.907612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.907631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.907696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.907718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:48.716 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601
00:07:48.716 #22 NEW cov: 12096 ft: 14420 corp: 10/287b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeByte-
00:07:48.716 [2024-06-07 22:58:40.976952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fd00fdfd cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.976984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.716 [2024-06-07 22:58:40.977052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00fd0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.716 [2024-06-07 22:58:40.977070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.716 #23 NEW cov: 12096 ft: 14697 corp: 11/305b lim: 35 exec/s: 23 rss: 71Mb L: 18/35 MS: 1 CrossOver-
00:07:48.975 [2024-06-07 22:58:41.047526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.047558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.047631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.047650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.047712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00fdfd01 cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.047730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.047793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.047810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.975 #24 NEW cov: 12096 ft: 14779 corp: 12/338b lim: 35 exec/s: 24 rss: 71Mb L: 33/35 MS: 1 PersAutoDict- DE: "\001\000"-
00:07:48.975 [2024-06-07 22:58:41.097276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.097308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.097376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.097394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.975 #25 NEW cov: 12096 ft: 14800 corp: 13/352b lim: 35 exec/s: 25 rss: 71Mb L: 14/35 MS: 1 CrossOver-
00:07:48.975 [2024-06-07 22:58:41.147417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fffdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.147449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.147516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.147535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.975 #26 NEW cov: 12096 ft: 14818 corp: 14/366b lim: 35 exec/s: 26 rss: 71Mb L: 14/35 MS: 1 ChangeBit-
00:07:48.975 [2024-06-07 22:58:41.217966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.217998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.218065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0afdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.218083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.218150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.218168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:48.975 [2024-06-07 22:58:41.218235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:7afd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:48.975 [2024-06-07 22:58:41.218253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:48.975 #32 NEW cov: 12096 ft: 14826 corp: 15/400b lim: 35 exec/s: 32 rss: 71Mb L: 34/35 MS: 1 CrossOver-
00:07:49.235 [2024-06-07 22:58:41.267780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.267812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.267876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.267895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.235 #33 NEW cov: 12096 ft: 14837 corp: 16/418b lim: 35 exec/s: 33 rss: 71Mb L: 18/35 MS: 1 EraseBytes-
00:07:49.235 [2024-06-07 22:58:41.318299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.318330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.318397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.318415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.318478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00fdfd01 cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.318495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.318558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.318583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.235 #34 NEW cov: 12096 ft: 14864 corp: 17/451b lim: 35 exec/s: 34 rss: 71Mb L: 33/35 MS: 1 ChangeBinInt-
00:07:49.235 [2024-06-07 22:58:41.388685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.388722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.388789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.388807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.388875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.388893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.388958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fd7a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.388976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.389044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.389062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:49.235 #35 NEW cov: 12096 ft: 14918 corp: 18/486b lim: 35 exec/s: 35 rss: 71Mb L: 35/35 MS: 1 ChangeByte-
00:07:49.235 [2024-06-07 22:58:41.438621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.438653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.438718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.438736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.438802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00fdfd01 cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.438820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.438883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.438901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.235 #36 NEW cov: 12096 ft: 14928 corp: 19/519b lim: 35 exec/s: 36 rss: 71Mb L: 33/35 MS: 1 ShuffleBytes-
00:07:49.235 [2024-06-07 22:58:41.488732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fd28fdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.488764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.235 [2024-06-07 22:58:41.488832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.235 [2024-06-07 22:58:41.488850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.236 [2024-06-07 22:58:41.488915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0100fdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.236 [2024-06-07 22:58:41.488932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.236 [2024-06-07 22:58:41.489000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.236 [2024-06-07 22:58:41.489019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.495 #37 NEW cov: 12096 ft: 14962 corp: 20/553b lim: 35 exec/s: 37 rss: 71Mb L: 34/35 MS: 1 InsertByte-
00:07:49.495 [2024-06-07 22:58:41.559020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.559052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.559119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.559138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.559204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.559221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.559286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdf9fd cdw11:7afd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.559303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.495 #38 NEW cov: 12096 ft: 15021 corp: 21/587b lim: 35 exec/s: 38 rss: 72Mb L: 34/35 MS: 1 ChangeBit-
00:07:49.495 [2024-06-07 22:58:41.608735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.608767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.608833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.608851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.495 #39 NEW cov: 12096 ft: 15039 corp: 22/605b lim: 35 exec/s: 39 rss: 72Mb L: 18/35 MS: 1 EraseBytes-
00:07:49.495 [2024-06-07 22:58:41.659342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.659374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.659441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:d4fdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.659459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.659526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.659543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.659613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdf9fd cdw11:7afd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.659631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.495 #40 NEW cov: 12096 ft: 15043 corp: 23/639b lim: 35 exec/s: 40 rss: 72Mb L: 34/35 MS: 1 ChangeByte-
00:07:49.495 [2024-06-07 22:58:41.729526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.729559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.729631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.729650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.729714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfdfdfd cdw11:fd010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.729731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.495 [2024-06-07 22:58:41.729795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdf9fd cdw11:7afd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.495 [2024-06-07 22:58:41.729813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.495 #41 NEW cov: 12096 ft: 15051 corp: 24/673b lim: 35 exec/s: 41 rss: 72Mb L: 34/35 MS: 1 PersAutoDict- DE: "\001\000"-
00:07:49.755 [2024-06-07 22:58:41.779918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.779952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.780019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.780037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.780100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfd00fd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.780118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.780181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.780199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.780263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.780281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:49.755 #42 NEW cov: 12096 ft: 15086 corp: 25/708b lim: 35 exec/s: 42 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes-
00:07:49.755 [2024-06-07 22:58:41.830131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.830164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.830233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.830251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.830323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fdfd00fd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.830341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.830408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.830427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.830491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.830509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:49.755 #43 NEW cov: 12096 ft: 15094 corp: 26/743b lim: 35 exec/s: 43 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes-
00:07:49.755 [2024-06-07 22:58:41.899985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.900017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.900087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.900106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.900172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00fdfd01 cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.900190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.900255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.900272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.755 #44 NEW cov: 12096 ft: 15113 corp: 27/776b lim: 35 exec/s: 44 rss: 72Mb L: 33/35 MS: 1 ShuffleBytes-
00:07:49.755 [2024-06-07 22:58:41.970380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.970413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.970482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.970500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.970565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.970588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.970652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.970669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:41.970738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:41.970756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:49.755 #45 NEW cov: 12096 ft: 15122 corp: 28/811b lim: 35 exec/s: 45 rss: 72Mb L: 35/35 MS: 1 ChangeBit-
00:07:49.755 [2024-06-07 22:58:42.019946] nvme_qpair.c: 225:nvme_admin_qpair_print_command:
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fdfdfdfd cdw11:fdfd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:42.019979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:49.755 [2024-06-07 22:58:42.020047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fdfdfdfd cdw11:fd7a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:49.755 [2024-06-07 22:58:42.020066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:50.014 #46 NEW cov: 12096 ft: 15137 corp: 29/830b lim: 35 exec/s: 23 rss: 72Mb L: 19/35 MS: 1 InsertByte-
00:07:50.014 #46 DONE cov: 12096 ft: 15137 corp: 29/830b lim: 35 exec/s: 23 rss: 72Mb
00:07:50.014 ###### Recommended dictionary. ######
00:07:50.014 "\001\000" # Uses: 2
00:07:50.014 ###### End of recommended dictionary. ######
00:07:50.014 Done 46 runs in 2 second(s)
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405'
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:50.014 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:50.015 22:58:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5
00:07:50.015 [2024-06-07 22:58:42.254313] Starting SPDK v24.09-pre
git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:50.015 [2024-06-07 22:58:42.254385] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151365 ] 00:07:50.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.273 [2024-06-07 22:58:42.548007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.532 [2024-06-07 22:58:42.654298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.532 [2024-06-07 22:58:42.716608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.532 [2024-06-07 22:58:42.732988] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:50.532 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.532 INFO: Seed: 3003446712 00:07:50.532 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:50.532 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:50.532 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:50.532 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.532 #2 INITED exec/s: 0 rss: 63Mb 00:07:50.532 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.532 This may also happen if the target rejected all inputs we tried so far 00:07:50.532 [2024-06-07 22:58:42.788409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.532 [2024-06-07 22:58:42.788445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.050 NEW_FUNC[1/684]: 0x48b180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:51.050 NEW_FUNC[2/684]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.050 #6 NEW cov: 11851 ft: 11849 corp: 2/10b lim: 45 exec/s: 0 rss: 70Mb L: 9/9 MS: 4 CopyPart-ChangeBinInt-CrossOver-CMP- DE: "h1s\314I>\016\000"- 00:07:51.050 [2024-06-07 22:58:43.239572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310a0a cdw11:73cc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.050 [2024-06-07 22:58:43.239619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.050 NEW_FUNC[1/2]: 0xff5300 in posix_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1447 00:07:51.050 NEW_FUNC[2/2]: 0x1aa1760 in spdk_sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:522 00:07:51.050 #8 NEW cov: 11993 ft: 12591 corp: 3/20b lim: 45 exec/s: 0 rss: 71Mb L: 10/10 MS: 2 CopyPart-PersAutoDict- DE: "h1s\314I>\016\000"- 00:07:51.050 [2024-06-07 22:58:43.289608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.050 [2024-06-07 22:58:43.289641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:51.309 #9 NEW cov: 11999 ft: 12767 corp: 4/29b lim: 45 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:51.309 [2024-06-07 22:58:43.349829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73ec6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.309 [2024-06-07 22:58:43.349863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.309 #10 NEW cov: 12084 ft: 13103 corp: 5/38b lim: 45 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ChangeBit- 00:07:51.309 [2024-06-07 22:58:43.420065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73ec6831 cdw11:fd3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.309 [2024-06-07 22:58:43.420098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.309 #11 NEW cov: 12084 ft: 13203 corp: 6/47b lim: 45 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 CopyPart- 00:07:51.309 [2024-06-07 22:58:43.480170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ec493173 cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.309 [2024-06-07 22:58:43.480203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.309 #12 NEW cov: 12084 ft: 13350 corp: 7/56b lim: 45 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 CopyPart- 00:07:51.309 [2024-06-07 22:58:43.530379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.309 [2024-06-07 22:58:43.530411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.309 #13 NEW cov: 12084 ft: 13458 corp: 8/73b lim: 45 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 PersAutoDict- DE: "h1s\314I>\016\000"- 00:07:51.309 [2024-06-07 22:58:43.580417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.309 [2024-06-07 22:58:43.580450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.569 #17 NEW cov: 12084 ft: 13560 corp: 9/82b lim: 45 exec/s: 0 rss: 71Mb L: 9/17 MS: 4 ShuffleBytes-CopyPart-EraseBytes-CMP- DE: "\377\377\377\377\377\377\377\000"- 00:07:51.569 [2024-06-07 22:58:43.620614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.569 [2024-06-07 22:58:43.620646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.569 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:51.569 #18 NEW cov: 12107 ft: 13588 corp: 10/99b lim: 45 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 ChangeByte- 00:07:51.569 [2024-06-07 22:58:43.690787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:31736831 cdw11:00fd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.569 [2024-06-07 22:58:43.690819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.569 #20 NEW cov: 12107 ft: 13637 corp: 11/108b lim: 45 exec/s: 0 rss: 72Mb L: 9/17 MS: 2 EraseBytes-CopyPart- 00:07:51.569 [2024-06-07 22:58:43.750945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310af3 cdw11:73cc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.569 [2024-06-07 22:58:43.750978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.569 #21 NEW cov: 12107 ft: 13718 corp: 12/118b lim: 45 exec/s: 21 rss: 72Mb L: 10/17 MS: 1 ChangeByte- 00:07:51.569 [2024-06-07 22:58:43.821126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73ec6831 cdw11:093e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.569 [2024-06-07 22:58:43.821159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 #22 NEW cov: 12107 ft: 13774 corp: 13/127b lim: 45 exec/s: 22 rss: 72Mb L: 9/17 MS: 1 ChangeBit- 00:07:51.829 [2024-06-07 22:58:43.861274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ec493173 cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:43.861306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 #23 NEW cov: 12107 ft: 13806 corp: 14/136b lim: 45 exec/s: 23 rss: 72Mb L: 9/17 MS: 1 ChangeBinInt- 00:07:51.829 [2024-06-07 22:58:43.931459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6831 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:43.931491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 #24 NEW cov: 12107 ft: 13835 corp: 15/153b lim: 45 exec/s: 24 rss: 72Mb L: 17/17 MS: 1 ChangeBit- 00:07:51.829 [2024-06-07 22:58:43.981970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:b7b76831 cdw11:b7b70005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:43.982006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 [2024-06-07 22:58:43.982070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:b7b7b7b7 cdw11:b7b70005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:43.982089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.829 [2024-06-07 22:58:43.982152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:b7b7b7b7 cdw11:b7b70005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:43.982169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.829 #29 NEW cov: 12107 ft: 14628 corp: 16/182b lim: 45 exec/s: 29 rss: 72Mb L: 29/29 MS: 5 EraseBytes-ChangeBinInt-EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:51.829 [2024-06-07 22:58:44.051785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310a0a cdw11:73cc0002 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:44.051818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 #30 NEW cov: 12107 ft: 14676 corp: 17/192b lim: 45 exec/s: 30 rss: 72Mb L: 10/29 MS: 1 ChangeByte- 00:07:51.829 [2024-06-07 22:58:44.102504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310a0a cdw11:73ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:44.102536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.829 [2024-06-07 22:58:44.102605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:44.102624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.829 [2024-06-07 22:58:44.102687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:44.102705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.829 [2024-06-07 22:58:44.102763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.829 [2024-06-07 22:58:44.102781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.088 #31 NEW cov: 12107 ft: 14999 corp: 18/234b lim: 45 exec/s: 31 rss: 72Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:07:52.088 [2024-06-07 22:58:44.172121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6830 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.172153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.088 #32 NEW cov: 12107 ft: 15057 corp: 19/243b lim: 45 exec/s: 32 rss: 72Mb L: 9/42 MS: 1 ChangeASCIIInt- 00:07:52.088 [2024-06-07 22:58:44.222232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ec7368fd cdw11:313e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.222264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.088 #33 NEW cov: 12107 ft: 15078 corp: 20/252b lim: 45 exec/s: 33 rss: 72Mb L: 9/42 MS: 1 ShuffleBytes- 00:07:52.088 [2024-06-07 22:58:44.272907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310a0a cdw11:73ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.272943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.088 [2024-06-07 22:58:44.273010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.273027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.088 [2024-06-07 22:58:44.273088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.273106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.088 [2024-06-07 22:58:44.273169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:cc410001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.273186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.088 #34 NEW cov: 12107 ft: 15143 corp: 21/288b lim: 45 exec/s: 34 rss: 72Mb L: 36/42 MS: 1 EraseBytes- 00:07:52.088 [2024-06-07 22:58:44.342924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eded0aed cdw11:eded0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.342958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.088 [2024-06-07 22:58:44.343024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:edededed cdw11:eded0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.343043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.088 [2024-06-07 22:58:44.343106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ed0aeded cdw11:68310003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.088 [2024-06-07 22:58:44.343124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.348 #35 NEW cov: 12107 ft: 15202 corp: 22/318b lim: 45 exec/s: 35 rss: 72Mb L: 30/42 MS: 1 InsertRepeatedBytes- 00:07:52.348 [2024-06-07 22:58:44.392918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:31730a68 cdw11:cc490001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.392950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.348 [2024-06-07 22:58:44.393012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3173f368 cdw11:cc490001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.393030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.348 #36 NEW cov: 12107 ft: 15415 corp: 23/336b lim: 45 exec/s: 36 rss: 72Mb L: 18/42 MS: 1 PersAutoDict- DE: "h1s\314I>\016\000"- 00:07:52.348 [2024-06-07 22:58:44.463128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:31730a68 cdw11:cc490001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.463159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.348 [2024-06-07 22:58:44.463221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3173f36c cdw11:cc490001 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.463239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.348 #37 NEW cov: 12107 ft: 15478 corp: 24/354b lim: 45 exec/s: 37 rss: 72Mb L: 18/42 MS: 1 ChangeBit- 00:07:52.348 [2024-06-07 22:58:44.533068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6830 cdw11:493e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.533104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.348 #38 NEW cov: 12107 ft: 15532 corp: 25/364b lim: 45 exec/s: 38 rss: 73Mb L: 10/42 MS: 1 InsertByte- 00:07:52.348 [2024-06-07 22:58:44.593281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0e3e31cc cdw11:49730003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.348 [2024-06-07 22:58:44.593313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.348 #39 NEW cov: 12107 ft: 15563 corp: 26/373b lim: 45 exec/s: 39 rss: 73Mb L: 9/42 MS: 1 ShuffleBytes- 00:07:52.608 [2024-06-07 22:58:44.643800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:73cc6830 cdw11:49550002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.608 [2024-06-07 22:58:44.643832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.608 [2024-06-07 22:58:44.643899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:55555555 cdw11:55550002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.608 [2024-06-07 22:58:44.643917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.608 [2024-06-07 22:58:44.643979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:55555555 cdw11:55550002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.608 [2024-06-07 22:58:44.643996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.608 #40 NEW cov: 12107 ft: 15597 corp: 27/407b lim: 45 exec/s: 40 rss: 73Mb L: 34/42 MS: 1 InsertRepeatedBytes- 00:07:52.608 [2024-06-07 22:58:44.693712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.608 [2024-06-07 22:58:44.693743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.608 [2024-06-07 22:58:44.693806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7b7b7b7b cdw11:7b7b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.608 [2024-06-07 22:58:44.693824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.608 #44 NEW cov: 12107 ft: 15603 corp: 28/433b lim: 45 exec/s: 44 rss: 73Mb L: 26/42 MS: 4 EraseBytes-CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:52.608 [2024-06-07 22:58:44.743887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:68310af3 
cdw11:73ec0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.608 [2024-06-07 22:58:44.743918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:52.608 [2024-06-07 22:58:44.743978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00fd310e cdw11:73cc0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.608 [2024-06-07 22:58:44.743997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:52.608 #45 NEW cov: 12107 ft: 15618 corp: 29/452b lim: 45 exec/s: 22 rss: 73Mb L: 19/42 MS: 1 CrossOver-
00:07:52.608 #45 DONE cov: 12107 ft: 15618 corp: 29/452b lim: 45 exec/s: 22 rss: 73Mb
00:07:52.608 ###### Recommended dictionary. ######
00:07:52.608 "h1s\314I>\016\000" # Uses: 3
00:07:52.608 "\377\377\377\377\377\377\377\000" # Uses: 0
00:07:52.608 ###### End of recommended dictionary. ######
00:07:52.608 Done 45 runs in 2 second(s)
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406'
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:52.867 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:52.868 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:52.868 22:58:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6
00:07:52.868 [2024-06-07 22:58:44.957374] Starting SPDK
v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:52.868 [2024-06-07 22:58:44.957446] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151712 ] 00:07:52.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.127 [2024-06-07 22:58:45.251285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.127 [2024-06-07 22:58:45.342784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.385 [2024-06-07 22:58:45.405130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.386 [2024-06-07 22:58:45.421510] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:53.386 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.386 INFO: Seed: 1395482106 00:07:53.386 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:53.386 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:53.386 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:53.386 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.386 #2 INITED exec/s: 0 rss: 63Mb 00:07:53.386 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:53.386 This may also happen if the target rejected all inputs we tried so far 00:07:53.386 [2024-06-07 22:58:45.476955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.386 [2024-06-07 22:58:45.476990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.644 NEW_FUNC[1/684]: 0x48d990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:53.644 NEW_FUNC[2/684]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.644 #4 NEW cov: 11779 ft: 11781 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 CopyPart-CrossOver- 00:07:53.904 [2024-06-07 22:58:45.928133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.904 [2024-06-07 22:58:45.928175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.904 #5 NEW cov: 11910 ft: 12438 corp: 3/6b lim: 10 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:07:53.904 [2024-06-07 22:58:45.998202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:53.904 [2024-06-07 22:58:45.998236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.904 #6 NEW cov: 11916 ft: 12782 corp: 4/9b lim: 10 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:53.904 [2024-06-07 22:58:46.058365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b0a cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.058398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:53.904 #12 NEW cov: 12001 ft: 12990 corp: 5/11b lim: 10 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 ChangeByte- 00:07:53.904 [2024-06-07 22:58:46.108796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009595 cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.108828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.904 [2024-06-07 22:58:46.108895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.108914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.904 [2024-06-07 22:58:46.108976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.108994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.904 #13 NEW cov: 12001 ft: 13291 corp: 6/17b lim: 10 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:53.904 [2024-06-07 22:58:46.158954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009595 cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.158986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.904 [2024-06-07 22:58:46.159053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.159071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.904 [2024-06-07 22:58:46.159135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:53.904 [2024-06-07 22:58:46.159153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.164 #14 NEW cov: 12001 ft: 13378 corp: 7/24b lim: 10 exec/s: 0 rss: 71Mb L: 7/7 MS: 1 InsertByte- 00:07:54.164 [2024-06-07 22:58:46.229188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009515 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.229221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.229284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.229302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.229363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.229381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.164 #15 NEW cov: 12001 ft: 13433 corp: 8/30b lim: 10 exec/s: 0 rss: 71Mb L: 6/7 MS: 1 ChangeBit- 00:07:54.164 [2024-06-07 22:58:46.279387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.279420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.279484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.279502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.279565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004f4f cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.279590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.279653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004f95 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.279670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.164 #16 NEW cov: 12001 ft: 13656 corp: 9/39b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:54.164 [2024-06-07 22:58:46.329315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.329349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.329413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b0a cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.329431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.164 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:54.164 #17 NEW cov: 12024 ft: 13833 corp: 10/44b lim: 10 exec/s: 0 rss: 71Mb L: 5/9 MS: 1 CrossOver- 00:07:54.164 [2024-06-07 22:58:46.379316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f50a cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.379349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.164 #18 NEW cov: 12024 ft: 13886 corp: 11/46b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:54.164 [2024-06-07 22:58:46.419688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009541 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.419721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.419786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.164 [2024-06-07 22:58:46.419805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.164 [2024-06-07 22:58:46.419865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:54.164 [2024-06-07 
22:58:46.419883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.434 #19 NEW cov: 12024 ft: 13968 corp: 12/53b lim: 10 exec/s: 19 rss: 72Mb L: 7/9 MS: 1 CopyPart- 00:07:54.434 [2024-06-07 22:58:46.489625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:07:54.434 [2024-06-07 22:58:46.489658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.434 #20 NEW cov: 12024 ft: 13980 corp: 13/55b lim: 10 exec/s: 20 rss: 72Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:54.434 [2024-06-07 22:58:46.529928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.434 [2024-06-07 22:58:46.529961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.434 [2024-06-07 22:58:46.530025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b95 cdw11:00000000 00:07:54.434 [2024-06-07 22:58:46.530043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.434 #21 NEW cov: 12024 ft: 14018 corp: 14/60b lim: 10 exec/s: 21 rss: 72Mb L: 5/9 MS: 1 CrossOver- 00:07:54.434 [2024-06-07 22:58:46.600145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.434 [2024-06-07 22:58:46.600178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.434 [2024-06-07 22:58:46.600242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000b95 cdw11:00000000 00:07:54.434 [2024-06-07 22:58:46.600261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.434 #22 NEW cov: 12024 ft: 14038 corp: 15/65b lim: 10 exec/s: 22 rss: 72Mb L: 5/9 MS: 1 ChangeBit- 00:07:54.435 [2024-06-07 22:58:46.670475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.435 [2024-06-07 22:58:46.670509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.435 [2024-06-07 22:58:46.670574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.435 [2024-06-07 22:58:46.670599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.435 [2024-06-07 22:58:46.670664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000930a cdw11:00000000 00:07:54.435 [2024-06-07 22:58:46.670681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.435 #23 NEW cov: 12024 ft: 14058 corp: 16/72b lim: 10 exec/s: 23 rss: 72Mb L: 7/9 MS: 1 ChangeBinInt- 00:07:54.700 [2024-06-07 22:58:46.720300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a95 cdw11:00000000 00:07:54.700 [2024-06-07 
22:58:46.720332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.700 #24 NEW cov: 12024 ft: 14071 corp: 17/75b lim: 10 exec/s: 24 rss: 72Mb L: 3/9 MS: 1 EraseBytes- 00:07:54.700 [2024-06-07 22:58:46.760415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000273a cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.760447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.700 #28 NEW cov: 12024 ft: 14078 corp: 18/77b lim: 10 exec/s: 28 rss: 72Mb L: 2/9 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertByte- 00:07:54.700 [2024-06-07 22:58:46.800489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.800522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.700 #29 NEW cov: 12024 ft: 14122 corp: 19/79b lim: 10 exec/s: 29 rss: 72Mb L: 2/9 MS: 1 CopyPart- 00:07:54.700 [2024-06-07 22:58:46.840914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009715 cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.840948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.700 [2024-06-07 22:58:46.841014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.841033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.700 [2024-06-07 22:58:46.841093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.841111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.700 #30 NEW cov: 12024 ft: 14131 corp: 20/85b lim: 10 exec/s: 30 rss: 72Mb L: 6/9 MS: 1 ChangeBit- 00:07:54.700 [2024-06-07 22:58:46.911116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009541 cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.911149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.700 [2024-06-07 22:58:46.911214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.911232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.700 [2024-06-07 22:58:46.911294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000950a cdw11:00000000 00:07:54.700 [2024-06-07 22:58:46.911311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.700 #31 NEW cov: 12024 ft: 14164 corp: 21/92b lim: 10 exec/s: 31 rss: 72Mb L: 7/9 MS: 1 CopyPart- 00:07:54.958 [2024-06-07 22:58:46.981051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b0a cdw11:00000000 
00:07:54.958 [2024-06-07 22:58:46.981084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.958 #32 NEW cov: 12024 ft: 14169 corp: 22/95b lim: 10 exec/s: 32 rss: 72Mb L: 3/9 MS: 1 CopyPart- 00:07:54.958 [2024-06-07 22:58:47.041352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b2c cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.041385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.959 [2024-06-07 22:58:47.041448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.041466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.959 #33 NEW cov: 12024 ft: 14195 corp: 23/99b lim: 10 exec/s: 33 rss: 72Mb L: 4/9 MS: 1 InsertByte- 00:07:54.959 [2024-06-07 22:58:47.111805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.111837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.959 [2024-06-07 22:58:47.111898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.111916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.959 [2024-06-07 22:58:47.111978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009593 cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.111996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.959 [2024-06-07 22:58:47.112057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a41 cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.112075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.959 #34 NEW cov: 12024 ft: 14214 corp: 24/107b lim: 10 exec/s: 34 rss: 73Mb L: 8/9 MS: 1 CopyPart- 00:07:54.959 [2024-06-07 22:58:47.181567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000950a cdw11:00000000 00:07:54.959 [2024-06-07 22:58:47.181605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.959 #35 NEW cov: 12024 ft: 14278 corp: 25/110b lim: 10 exec/s: 35 rss: 73Mb L: 3/9 MS: 1 CopyPart- 00:07:55.217 [2024-06-07 22:58:47.242191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b0a cdw11:00000000 00:07:55.217 [2024-06-07 22:58:47.242223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.217 [2024-06-07 22:58:47.242285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000af2 cdw11:00000000 00:07:55.217 [2024-06-07 22:58:47.242302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.217 [2024-06-07 22:58:47.242366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f2f2 cdw11:00000000 00:07:55.217 [2024-06-07 22:58:47.242384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.218 [2024-06-07 22:58:47.242447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f2f2 cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.242464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.218 #36 NEW cov: 12024 ft: 14352 corp: 26/119b lim: 10 exec/s: 36 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:55.218 [2024-06-07 22:58:47.291891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.291923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.218 #37 NEW cov: 12024 ft: 14360 corp: 27/121b lim: 10 exec/s: 37 rss: 73Mb L: 2/9 MS: 1 CopyPart- 00:07:55.218 [2024-06-07 22:58:47.352505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.352537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.218 [2024-06-07 22:58:47.352608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.352628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.218 [2024-06-07 22:58:47.352693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a2b cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.352710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.218 [2024-06-07 22:58:47.352774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.352792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.218 #38 NEW cov: 12024 ft: 14376 corp: 28/129b lim: 10 exec/s: 38 rss: 73Mb L: 8/9 MS: 1 CrossOver- 00:07:55.218 [2024-06-07 22:58:47.402160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000950a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.402192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.218 #39 NEW cov: 12024 ft: 14456 corp: 29/132b lim: 10 exec/s: 39 rss: 73Mb L: 3/9 MS: 1 ChangeByte- 00:07:55.218 [2024-06-07 22:58:47.462383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000703a cdw11:00000000 00:07:55.218 [2024-06-07 22:58:47.462420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.478 #40 NEW cov: 12024 ft: 14466 corp: 30/134b lim: 10 
exec/s: 20 rss: 73Mb L: 2/9 MS: 1 ChangeByte-
00:07:55.478 #40 DONE cov: 12024 ft: 14466 corp: 30/134b lim: 10 exec/s: 20 rss: 73Mb
00:07:55.478 Done 40 runs in 2 second(s)
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:55.478 22:58:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7
00:07:55.478 [2024-06-07 22:58:47.696581] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:07:55.478 [2024-06-07 22:58:47.696655] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152231 ] 00:07:55.478 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.737 [2024-06-07 22:58:47.989804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.996 [2024-06-07 22:58:48.092207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.996 [2024-06-07 22:58:48.154639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.996 [2024-06-07 22:58:48.171018] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:55.996 INFO: Running with entropic power schedule (0xFF, 100). 00:07:55.996 INFO: Seed: 4146481535 00:07:55.996 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:55.996 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:55.996 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:55.996 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.996 #2 INITED exec/s: 0 rss: 63Mb 00:07:55.996 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.996 This may also happen if the target rejected all inputs we tried so far 00:07:55.996 [2024-06-07 22:58:48.249449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:55.996 [2024-06-07 22:58:48.249496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.996 [2024-06-07 22:58:48.249582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:55.996 [2024-06-07 22:58:48.249601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.996 [2024-06-07 22:58:48.249681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:55.996 [2024-06-07 22:58:48.249701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.996 [2024-06-07 22:58:48.249780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:55.996 [2024-06-07 22:58:48.249800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.996 [2024-06-07 22:58:48.249879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fffa cdw11:00000000 00:07:55.996 [2024-06-07 22:58:48.249899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.515 NEW_FUNC[1/684]: 0x48e380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:56.515 NEW_FUNC[2/684]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.515 #4 NEW cov: 11780 ft: 11781 
corp: 2/11b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:56.515 [2024-06-07 22:58:48.609543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.609596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.609730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.609753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.609889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.609913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.610060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.610082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.610226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.610249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.515 #5 NEW cov: 11910 ft: 12511 corp: 3/21b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:56.515 [2024-06-07 22:58:48.669751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.669791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.669931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.669956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.670094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9ff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.670118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.670255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.670280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.670410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fffa cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.670433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.515 #6 
NEW cov: 11916 ft: 12734 corp: 4/31b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:56.515 [2024-06-07 22:58:48.749861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.749897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.750028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.750051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.750174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9ff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.750197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.750331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.750353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.515 [2024-06-07 22:58:48.750484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fff2 cdw11:00000000 00:07:56.515 [2024-06-07 22:58:48.750505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.774 #7 NEW cov: 12001 ft: 13069 corp: 5/41b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:07:56.774 [2024-06-07 22:58:48.829624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.829661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.829803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.829827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.829966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.829988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.774 #8 NEW cov: 12001 ft: 13510 corp: 6/47b lim: 10 exec/s: 0 rss: 71Mb L: 6/10 MS: 1 EraseBytes- 00:07:56.774 [2024-06-07 22:58:48.910177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.910214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.910361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.910385] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.910531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.910555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.910698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.910722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.774 #9 NEW cov: 12001 ft: 13624 corp: 7/56b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 CopyPart- 00:07:56.774 [2024-06-07 22:58:48.990486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff89 cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.990522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.990664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.774 [2024-06-07 22:58:48.990689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.774 [2024-06-07 22:58:48.990821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:56.775 [2024-06-07 22:58:48.990844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.775 #10 NEW cov: 12001 ft: 13696 corp: 8/62b lim: 10 exec/s: 0 rss: 71Mb L: 6/10 MS: 1 ChangeByte- 00:07:56.775 [2024-06-07 22:58:49.051205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.775 [2024-06-07 22:58:49.051240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.051375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.051400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.051530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fff6 cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.051554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.051691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.051714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.051850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.051872] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.034 #11 NEW cov: 12001 ft: 13730 corp: 9/72b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:07:57.034 [2024-06-07 22:58:49.111121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.111158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.111292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff9 cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.111320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.111452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.111476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.111621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.111644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.034 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:57.034 #12 NEW cov: 12024 ft: 13783 corp: 10/81b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:07:57.034 [2024-06-07 22:58:49.171071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.171109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.171247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.171273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.171416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.171441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.034 #13 NEW cov: 12024 ft: 13848 corp: 11/88b lim: 10 exec/s: 13 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:07:57.034 [2024-06-07 22:58:49.251371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.251407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.034 [2024-06-07 22:58:49.251537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.251560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:57.034 [2024-06-07 22:58:49.251711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.034 [2024-06-07 22:58:49.251734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.034 #14 NEW cov: 12024 ft: 13945 corp: 12/95b lim: 10 exec/s: 14 rss: 72Mb L: 7/10 MS: 1 CopyPart- 00:07:57.294 [2024-06-07 22:58:49.331388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.331425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.331565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.331595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.294 #15 NEW cov: 12024 ft: 14129 corp: 13/100b lim: 10 exec/s: 15 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:07:57.294 [2024-06-07 22:58:49.392094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.392129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.392269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.392291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.392421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.392443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.392580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.392602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.294 #16 NEW cov: 12024 ft: 14203 corp: 14/109b lim: 10 exec/s: 16 rss: 72Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:57.294 [2024-06-07 22:58:49.451817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.451861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.451995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.452019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.294 #17 NEW cov: 12024 ft: 14234 corp: 15/114b lim: 10 exec/s: 17 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:07:57.294 [2024-06-07 22:58:49.512452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fbff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.512489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.512644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.512666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.512816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.512837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.294 [2024-06-07 22:58:49.512977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.294 [2024-06-07 22:58:49.513000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.294 #18 NEW cov: 12024 ft: 14254 corp: 16/123b lim: 10 exec/s: 18 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:57.553 [2024-06-07 22:58:49.592554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff89 cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.592600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.592744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000fff7 cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.592768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.592907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.592931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.553 #19 NEW cov: 12024 ft: 14261 corp: 17/129b lim: 10 exec/s: 19 rss: 72Mb L: 6/10 MS: 1 ChangeBit- 00:07:57.553 [2024-06-07 22:58:49.673443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.673481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.673620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.673642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.673785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.673808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.673944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000bfff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.673967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.674106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fffa cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.674128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.553 #20 NEW cov: 12024 ft: 14337 corp: 18/139b lim: 10 exec/s: 20 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:07:57.553 [2024-06-07 22:58:49.732547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.732586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.553 #21 NEW cov: 12024 ft: 14548 corp: 19/142b lim: 10 exec/s: 21 rss: 72Mb L: 3/10 MS: 1 CrossOver- 00:07:57.553 [2024-06-07 22:58:49.813395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.813435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.813571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.813599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.553 [2024-06-07 22:58:49.813735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:57.553 [2024-06-07 22:58:49.813756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.812 #22 NEW cov: 12024 ft: 14569 corp: 20/148b lim: 10 exec/s: 22 rss: 72Mb L: 6/10 MS: 1 ShuffleBytes- 00:07:57.812 [2024-06-07 22:58:49.874131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.874168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:49.874299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9ff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.874323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:49.874450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.874471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:49.874606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fff2 cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.874630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:57.812 [2024-06-07 22:58:49.874761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fff2 cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.874783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.812 #23 NEW cov: 12024 ft: 14579 corp: 21/158b lim: 10 exec/s: 23 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:07:57.812 [2024-06-07 22:58:49.953824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.953862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:49.954003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.954028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:49.954164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.812 [2024-06-07 22:58:49.954186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.812 #24 NEW cov: 12024 ft: 14615 corp: 22/165b lim: 10 exec/s: 24 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:07:57.812 [2024-06-07 22:58:50.033790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:57.812 [2024-06-07 22:58:50.033830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.812 [2024-06-07 22:58:50.033984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.812 [2024-06-07 22:58:50.034012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.812 #28 NEW cov: 12024 ft: 14621 corp: 23/170b lim: 10 exec/s: 28 rss: 72Mb L: 5/10 MS: 4 ShuffleBytes-ChangeBit-ShuffleBytes-CMP- DE: "\001\000\000\000"- 00:07:58.072 [2024-06-07 22:58:50.094850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.094886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.095017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.095038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.095171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000eff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.095192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.095327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 
00:07:58.072 [2024-06-07 22:58:50.095349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.095485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.095506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.072 #29 NEW cov: 12024 ft: 14678 corp: 24/180b lim: 10 exec/s: 29 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:58.072 [2024-06-07 22:58:50.175005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.175041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.175176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.175199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.175334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffbf cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.175356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.175490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.175514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.175646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000fafa cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.175670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.072 #30 NEW cov: 12024 ft: 14698 corp: 25/190b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:07:58.072 [2024-06-07 22:58:50.235001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fffb cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.235038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.235183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.235204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.235346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.235368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.072 [2024-06-07 22:58:50.235501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:07:58.072 [2024-06-07 22:58:50.235523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.072 #31 NEW cov: 12024 ft: 14706 corp: 26/198b lim: 10 exec/s: 15 rss: 73Mb L: 8/10 MS: 1 CrossOver- 00:07:58.072 #31 DONE cov: 12024 ft: 14706 corp: 26/198b lim: 10 exec/s: 15 rss: 73Mb 00:07:58.072 ###### Recommended dictionary. ###### 00:07:58.072 "\001\000\000\000" # Uses: 0 00:07:58.072 ###### End of recommended dictionary. ###### 00:07:58.072 Done 31 runs in 2 second(s) 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:58.332 22:58:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:58.332 [2024-06-07 22:58:50.451229] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
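The two "echo leak:" lines in the trace feed the LeakSanitizer suppressions file named in LSAN_OPTIONS; the redirection into that file is not visible in the xtrace, so the following is an inferred minimal sketch of the mechanism.

suppress_file=/var/tmp/suppress_nvmf_fuzz
# objects intentionally left allocated on the fuzzer's teardown path are
# suppressed so LeakSanitizer does not fail the run on them
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
# report_objects=1 lists leaked objects in any report;
# print_suppressions=0 keeps the suppression summary out of the log
export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0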
00:07:58.332 [2024-06-07 22:58:50.451293] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152766 ] 00:07:58.332 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.591 [2024-06-07 22:58:50.759730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.591 [2024-06-07 22:58:50.865141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.850 [2024-06-07 22:58:50.927518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.850 [2024-06-07 22:58:50.943895] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:58.850 INFO: Running with entropic power schedule (0xFF, 100). 00:07:58.850 INFO: Seed: 2622510158 00:07:58.850 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:07:58.850 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:07:58.850 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:58.850 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.850 [2024-06-07 22:58:51.014933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.850 [2024-06-07 22:58:51.014979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.851 #2 INITED cov: 11808 ft: 11809 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:58.851 [2024-06-07 22:58:51.075174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.851 [2024-06-07 22:58:51.075210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.851 [2024-06-07 22:58:51.075297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.851 [2024-06-07 22:58:51.075318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.851 #3 NEW cov: 11938 ft: 12980 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:07:59.110 [2024-06-07 22:58:51.155091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.155126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.110 #4 NEW cov: 11944 ft: 13247 corp: 3/4b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeByte- 00:07:59.110 [2024-06-07 22:58:51.215686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.215721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.110 [2024-06-07 22:58:51.215807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.215827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.110 #5 NEW cov: 12029 ft: 13441 corp: 4/6b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:07:59.110 [2024-06-07 22:58:51.275928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.275963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.110 [2024-06-07 22:58:51.276049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.276069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.110 #6 NEW cov: 12029 ft: 13602 corp: 5/8b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CrossOver- 00:07:59.110 [2024-06-07 22:58:51.335786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.110 [2024-06-07 22:58:51.335821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.110 #7 NEW cov: 12029 ft: 13752 corp: 6/9b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeBinInt- 00:07:59.369 [2024-06-07 22:58:51.396415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.369 [2024-06-07 22:58:51.396452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.369 #8 NEW cov: 12029 ft: 13834 corp: 7/10b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:59.369 [2024-06-07 22:58:51.458132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.369 [2024-06-07 22:58:51.458167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.369 [2024-06-07 22:58:51.458254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.369 [2024-06-07 22:58:51.458275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.369 [2024-06-07 22:58:51.458364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.369 [2024-06-07 22:58:51.458385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.369 [2024-06-07 22:58:51.458472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:59.369 [2024-06-07 22:58:51.458490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.369 [2024-06-07 22:58:51.458591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.369 [2024-06-07 22:58:51.458615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.369 #9 NEW cov: 12029 ft: 14251 corp: 8/15b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CMP- DE: "\002\000\000\000"- 00:07:59.369 [2024-06-07 22:58:51.537327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.537363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.370 [2024-06-07 22:58:51.537449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.537469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.370 #10 NEW cov: 12029 ft: 14288 corp: 9/17b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:07:59.370 [2024-06-07 22:58:51.618780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.618816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.370 [2024-06-07 22:58:51.618905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.618927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.370 [2024-06-07 22:58:51.619010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.619029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.370 [2024-06-07 22:58:51.619106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.370 [2024-06-07 22:58:51.619126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.629 #11 NEW cov: 12029 ft: 14408 corp: 10/21b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:59.629 [2024-06-07 22:58:51.697641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.629 [2024-06-07 22:58:51.697675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.629 #12 NEW 
cov: 12029 ft: 14423 corp: 11/22b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:07:59.629 [2024-06-07 22:58:51.758145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.629 [2024-06-07 22:58:51.758180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.629 [2024-06-07 22:58:51.758271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.629 [2024-06-07 22:58:51.758292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.629 #13 NEW cov: 12029 ft: 14438 corp: 12/24b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:07:59.629 [2024-06-07 22:58:51.838096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.629 [2024-06-07 22:58:51.838134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.887 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:59.887 #14 NEW cov: 12052 ft: 14472 corp: 13/25b lim: 5 exec/s: 14 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:08:00.146 [2024-06-07 22:58:52.188567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.146 [2024-06-07 22:58:52.188618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.146 #15 NEW cov: 12052 ft: 14722 corp: 14/26b lim: 5 exec/s: 15 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:00.146 [2024-06-07 22:58:52.269034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.146 [2024-06-07 22:58:52.269072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.146 [2024-06-07 22:58:52.269207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.146 [2024-06-07 22:58:52.269231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.146 #16 NEW cov: 12052 ft: 14820 corp: 15/28b lim: 5 exec/s: 16 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:08:00.146 [2024-06-07 22:58:52.329159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.146 [2024-06-07 22:58:52.329195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.146 [2024-06-07 22:58:52.329337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.146 [2024-06-07 22:58:52.329357] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.146 #17 NEW cov: 12052 ft: 14918 corp: 16/30b lim: 5 exec/s: 17 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:08:00.147 [2024-06-07 22:58:52.409060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.147 [2024-06-07 22:58:52.409097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.406 #18 NEW cov: 12052 ft: 14935 corp: 17/31b lim: 5 exec/s: 18 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:00.406 [2024-06-07 22:58:52.469570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.469612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.469747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.469769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.406 #19 NEW cov: 12052 ft: 15003 corp: 18/33b lim: 5 exec/s: 19 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:08:00.406 [2024-06-07 22:58:52.529705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.529742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.529894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.529915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.406 #20 NEW cov: 12052 ft: 15019 corp: 19/35b lim: 5 exec/s: 20 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:08:00.406 [2024-06-07 22:58:52.610983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.611019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.611169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.611191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.611328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.611350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.611496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.611519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.406 [2024-06-07 22:58:52.611657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.406 [2024-06-07 22:58:52.611680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.406 #21 NEW cov: 12052 ft: 15034 corp: 20/40b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:08:00.666 [2024-06-07 22:58:52.690307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.690343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.690488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.690509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.666 #22 NEW cov: 12052 ft: 15044 corp: 21/42b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:08:00.666 [2024-06-07 22:58:52.771117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.771152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.771291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.771313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.771465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.771486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.771634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.771656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.666 #23 NEW cov: 12052 ft: 15120 corp: 22/46b lim: 5 exec/s: 23 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:08:00.666 [2024-06-07 22:58:52.831325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.831363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.831499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.831520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.831662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.831682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.831814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.831837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.666 #24 NEW cov: 12052 ft: 15137 corp: 23/50b lim: 5 exec/s: 24 rss: 74Mb L: 4/5 MS: 1 CMP- DE: "\002\000"- 00:08:00.666 [2024-06-07 22:58:52.910886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.910922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.666 [2024-06-07 22:58:52.911065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.666 [2024-06-07 22:58:52.911087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.925 #25 NEW cov: 12052 ft: 15164 corp: 24/52b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:08:00.925 [2024-06-07 22:58:52.991824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.925 [2024-06-07 22:58:52.991861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.925 [2024-06-07 22:58:52.991996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.925 [2024-06-07 22:58:52.992018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.925 [2024-06-07 22:58:52.992156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.925 [2024-06-07 22:58:52.992177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.925 [2024-06-07 22:58:52.992310] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.925 [2024-06-07 22:58:52.992335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.925 #26 NEW cov: 12052 ft: 15177 corp: 25/56b lim: 5 exec/s: 13 rss: 74Mb L: 4/5 MS: 1 ChangeBit- 00:08:00.925 #26 DONE cov: 12052 ft: 15177 corp: 25/56b lim: 5 exec/s: 13 rss: 74Mb 00:08:00.925 ###### Recommended dictionary. ###### 00:08:00.925 "\002\000\000\000" # Uses: 1 00:08:00.925 "\002\000" # Uses: 0 00:08:00.925 ###### End of recommended dictionary. ###### 00:08:00.925 Done 26 runs in 2 second(s) 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:00.925 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.184 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.184 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.184 22:58:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:08:01.184 [2024-06-07 22:58:53.229353] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:08:01.184 [2024-06-07 22:58:53.229427] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153301 ] 00:08:01.184 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.442 [2024-06-07 22:58:53.540059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.443 [2024-06-07 22:58:53.643170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.443 [2024-06-07 22:58:53.706168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.702 [2024-06-07 22:58:53.722541] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:01.702 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.702 INFO: Seed: 1107549495 00:08:01.702 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:01.702 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:01.702 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:01.702 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.702 [2024-06-07 22:58:53.778049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.778088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.702 #2 INITED cov: 11808 ft: 11809 corp: 1/1b exec/s: 0 rss: 69Mb 00:08:01.702 [2024-06-07 22:58:53.818160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.818193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.702 [2024-06-07 22:58:53.818259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.818278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.702 #3 NEW cov: 11938 ft: 13254 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:08:01.702 [2024-06-07 22:58:53.888216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.888250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.702 #4 NEW cov: 11944 ft: 13532 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ShuffleBytes- 00:08:01.702 [2024-06-07 22:58:53.938530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.938563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.702 [2024-06-07 22:58:53.938641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.702 [2024-06-07 22:58:53.938660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.702 #5 NEW cov: 12029 ft: 13756 corp: 4/6b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:08:01.961 [2024-06-07 22:58:53.988684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:53.988717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.961 [2024-06-07 22:58:53.988783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:53.988801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.961 #6 NEW cov: 12029 ft: 13876 corp: 5/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:08:01.961 [2024-06-07 22:58:54.059203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.059235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.961 [2024-06-07 22:58:54.059298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.059316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.961 [2024-06-07 22:58:54.059379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.059396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.961 [2024-06-07 22:58:54.059464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.059483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.961 #7 NEW cov: 12029 ft: 14248 corp: 6/12b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:08:01.961 [2024-06-07 22:58:54.128891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.128923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.961 #8 NEW cov: 12029 ft: 14289 corp: 7/13b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:08:01.961 [2024-06-07 22:58:54.168989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:01.961 [2024-06-07 22:58:54.169022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.961 #9 NEW cov: 12029 ft: 14382 corp: 8/14b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:08:01.961 [2024-06-07 22:58:54.229181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.961 [2024-06-07 22:58:54.229215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.220 #10 NEW cov: 12029 ft: 14403 corp: 9/15b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:08:02.220 [2024-06-07 22:58:54.280053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.280086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.220 [2024-06-07 22:58:54.280152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.280171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.220 [2024-06-07 22:58:54.280236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.280254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.220 [2024-06-07 22:58:54.280316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.280334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.220 [2024-06-07 22:58:54.280401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.280418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.220 #11 NEW cov: 12029 ft: 14474 corp: 10/20b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:08:02.220 [2024-06-07 22:58:54.349479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.220 [2024-06-07 22:58:54.349512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.221 #12 NEW cov: 12029 ft: 14494 corp: 11/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:08:02.221 [2024-06-07 22:58:54.410223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.410256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.221 [2024-06-07 22:58:54.410319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.410337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.221 [2024-06-07 22:58:54.410402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.410420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.221 [2024-06-07 22:58:54.410484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.410500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.221 #13 NEW cov: 12029 ft: 14526 corp: 12/25b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:02.221 [2024-06-07 22:58:54.460030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.460063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.221 [2024-06-07 22:58:54.460130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.221 [2024-06-07 22:58:54.460149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.221 #14 NEW cov: 12029 ft: 14558 corp: 13/27b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:08:02.501 [2024-06-07 22:58:54.510689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.510723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.501 [2024-06-07 22:58:54.510788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.510806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.501 [2024-06-07 22:58:54.510869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.510887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.501 [2024-06-07 22:58:54.510950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.510968] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.501 [2024-06-07 22:58:54.511031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.511049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.501 #15 NEW cov: 12029 ft: 14659 corp: 14/32b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertByte- 00:08:02.501 [2024-06-07 22:58:54.580130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.501 [2024-06-07 22:58:54.580164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.502 #16 NEW cov: 12029 ft: 14680 corp: 15/33b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 CrossOver- 00:08:02.502 [2024-06-07 22:58:54.621023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.502 [2024-06-07 22:58:54.621055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.502 [2024-06-07 22:58:54.621123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.502 [2024-06-07 22:58:54.621142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.502 [2024-06-07 22:58:54.621206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.502 [2024-06-07 22:58:54.621225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.502 [2024-06-07 22:58:54.621289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.502 [2024-06-07 22:58:54.621307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.502 [2024-06-07 22:58:54.621374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.502 [2024-06-07 22:58:54.621392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.763 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:02.763 #17 NEW cov: 12052 ft: 14739 corp: 16/38b lim: 5 exec/s: 17 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:02.763 [2024-06-07 22:58:54.951256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-06-07 22:58:54.951297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.763 #18 NEW cov: 12052 ft: 14767 corp: 17/39b lim: 5 exec/s: 18 rss: 71Mb L: 1/5 MS: 1 CrossOver- 00:08:02.763 [2024-06-07 22:58:54.991462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-06-07 22:58:54.991497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.763 [2024-06-07 22:58:54.991561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.763 [2024-06-07 22:58:54.991589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.023 #19 NEW cov: 12052 ft: 14775 corp: 18/41b lim: 5 exec/s: 19 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:03.023 [2024-06-07 22:58:55.061978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.062010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.062078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.062097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.062163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.062181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.062244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.062262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.023 #20 NEW cov: 12052 ft: 14797 corp: 19/45b lim: 5 exec/s: 20 rss: 71Mb L: 4/5 MS: 1 ChangeBinInt- 00:08:03.023 [2024-06-07 22:58:55.111922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.111955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.112024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.112042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.112107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.112125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.023 #21 NEW cov: 12052 ft: 14963 corp: 20/48b lim: 5 exec/s: 21 rss: 71Mb L: 3/5 MS: 1 EraseBytes- 00:08:03.023 [2024-06-07 22:58:55.181918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.181950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.182017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.182035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.023 #22 NEW cov: 12052 ft: 14987 corp: 21/50b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:08:03.023 [2024-06-07 22:58:55.252629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.252662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.252724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.252742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.252808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.023 [2024-06-07 22:58:55.252830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.023 [2024-06-07 22:58:55.252893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.024 [2024-06-07 22:58:55.252911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.024 [2024-06-07 22:58:55.252972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.024 [2024-06-07 22:58:55.252991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.283 #23 NEW cov: 12052 ft: 15031 corp: 22/55b lim: 5 exec/s: 23 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:08:03.283 [2024-06-07 22:58:55.322314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.322347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.322412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.322430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.283 #24 NEW cov: 12052 ft: 15052 corp: 23/57b lim: 5 exec/s: 24 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:08:03.283 [2024-06-07 22:58:55.372762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.372794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.372860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.372879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.372941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.372959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.373022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.373041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.283 #25 NEW cov: 12052 ft: 15086 corp: 24/61b lim: 5 exec/s: 25 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:08:03.283 [2024-06-07 22:58:55.443175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.443208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.443271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.443289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.443350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.443372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.443429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.443447] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.443508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.443526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.283 #26 NEW cov: 12052 ft: 15110 corp: 25/66b lim: 5 exec/s: 26 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:08:03.283 [2024-06-07 22:58:55.492595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.492628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.283 #27 NEW cov: 12052 ft: 15121 corp: 26/67b lim: 5 exec/s: 27 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:08:03.283 [2024-06-07 22:58:55.543461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.543495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.543560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.543583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.543647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.543667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.543731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.543748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.283 [2024-06-07 22:58:55.543814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.283 [2024-06-07 22:58:55.543833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.543 #28 NEW cov: 12052 ft: 15123 corp: 27/72b lim: 5 exec/s: 28 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:08:03.543 [2024-06-07 22:58:55.613618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.613652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.613717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.613737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.613801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.613823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.613889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.613907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.613971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.613989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.683878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.683913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.683978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.683997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.684057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.684075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.684136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.684154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.684214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.684231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.543 #30 NEW cov: 12052 ft: 15165 corp: 28/77b lim: 5 exec/s: 30 rss: 72Mb L: 5/5 MS: 2 ChangeBinInt-ChangeByte- 00:08:03.543 [2024-06-07 22:58:55.734035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT 
(0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.734068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.734136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.734154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.734220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.734238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.734304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.734325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.543 [2024-06-07 22:58:55.734388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.543 [2024-06-07 22:58:55.734406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.543 #31 NEW cov: 12052 ft: 15180 corp: 29/82b lim: 5 exec/s: 15 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:08:03.543 #31 DONE cov: 12052 ft: 15180 corp: 29/82b lim: 5 exec/s: 15 rss: 72Mb 00:08:03.543 Done 31 runs in 2 second(s) 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.803 22:58:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:03.803 [2024-06-07 22:58:55.965436] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:03.803 [2024-06-07 22:58:55.965524] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153699 ] 00:08:03.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.062 [2024-06-07 22:58:56.279125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.321 [2024-06-07 22:58:56.381866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.321 [2024-06-07 22:58:56.444172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.321 [2024-06-07 22:58:56.460555] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:04.321 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.321 INFO: Seed: 3845547929 00:08:04.321 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:04.321 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:04.321 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:04.321 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.321 #2 INITED exec/s: 0 rss: 64Mb 00:08:04.321 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:04.321 This may also happen if the target rejected all inputs we tried so far 00:08:04.321 [2024-06-07 22:58:56.516093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.321 [2024-06-07 22:58:56.516129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.321 [2024-06-07 22:58:56.516200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.321 [2024-06-07 22:58:56.516218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.889 NEW_FUNC[1/684]: 0x48fcf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:04.889 NEW_FUNC[2/684]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.889 #7 NEW cov: 11830 ft: 11821 corp: 2/22b lim: 40 exec/s: 0 rss: 71Mb L: 21/21 MS: 5 InsertByte-CrossOver-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:04.889 [2024-06-07 22:58:56.967532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:56.967574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.889 [2024-06-07 22:58:56.967656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:56.967674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.889 [2024-06-07 22:58:56.967748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:6868687a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:56.967766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.889 NEW_FUNC[1/1]: 0x1d88230 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:332 00:08:04.889 #13 NEW cov: 11961 ft: 12672 corp: 3/46b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:08:04.889 [2024-06-07 22:58:57.037437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.037472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.889 [2024-06-07 22:58:57.037548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686830 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.037567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.889 #19 NEW cov: 11967 ft: 12981 corp: 4/67b lim: 40 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 
ChangeByte- 00:08:04.889 [2024-06-07 22:58:57.087599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.087632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.889 [2024-06-07 22:58:57.087708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686830 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.087731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.889 #20 NEW cov: 12052 ft: 13222 corp: 5/88b lim: 40 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 ChangeBit- 00:08:04.889 [2024-06-07 22:58:57.157930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686d68 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.157964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.889 [2024-06-07 22:58:57.158042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.889 [2024-06-07 22:58:57.158061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.890 [2024-06-07 22:58:57.158132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.890 [2024-06-07 22:58:57.158150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.149 #21 NEW cov: 12052 ft: 13286 corp: 6/113b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:08:05.149 [2024-06-07 22:58:57.228139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.228174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.228248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686869 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.228267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.228341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:6868687a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.228360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.149 #22 NEW cov: 12052 ft: 13335 corp: 7/137b lim: 40 exec/s: 0 rss: 72Mb L: 24/25 MS: 1 ChangeBit- 00:08:05.149 [2024-06-07 22:58:57.278291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:05.149 [2024-06-07 22:58:57.278324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.278397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.278416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.278488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.278506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.149 #24 NEW cov: 12052 ft: 13386 corp: 8/161b lim: 40 exec/s: 0 rss: 72Mb L: 24/25 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:05.149 [2024-06-07 22:58:57.328200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.328233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.328305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.328328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.149 #25 NEW cov: 12052 ft: 13519 corp: 9/182b lim: 40 exec/s: 0 rss: 72Mb L: 21/25 MS: 1 CrossOver- 00:08:05.149 [2024-06-07 22:58:57.378538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.378571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.378652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.378671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.149 [2024-06-07 22:58:57.378742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0900 cdw11:0000ff27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.149 [2024-06-07 22:58:57.378760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.408 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:05.408 #26 NEW cov: 12075 ft: 13595 corp: 10/206b lim: 40 exec/s: 0 rss: 73Mb L: 24/25 MS: 1 ChangeBinInt- 00:08:05.408 [2024-06-07 22:58:57.448712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.448745] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.448824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.448842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.448911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0900 cdw11:0000fd27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.448928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.408 #27 NEW cov: 12075 ft: 13656 corp: 11/230b lim: 40 exec/s: 0 rss: 73Mb L: 24/25 MS: 1 ChangeBinInt- 00:08:05.408 [2024-06-07 22:58:57.518728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.518762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.518837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.518857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.408 #29 NEW cov: 12075 ft: 13710 corp: 12/249b lim: 40 exec/s: 29 rss: 73Mb L: 19/25 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:05.408 [2024-06-07 22:58:57.559049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.559082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.559157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686869 cdw11:68687a68 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.559180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.559253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:6868687a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.559271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.408 #30 NEW cov: 12075 ft: 13727 corp: 13/273b lim: 40 exec/s: 30 rss: 73Mb L: 24/25 MS: 1 CopyPart- 00:08:05.408 [2024-06-07 22:58:57.629078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.629112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.629188] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.629206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.408 #31 NEW cov: 12075 ft: 13767 corp: 14/290b lim: 40 exec/s: 31 rss: 73Mb L: 17/25 MS: 1 EraseBytes- 00:08:05.408 [2024-06-07 22:58:57.679201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.679234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.408 [2024-06-07 22:58:57.679311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.408 [2024-06-07 22:58:57.679330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.668 #32 NEW cov: 12075 ft: 13799 corp: 15/311b lim: 40 exec/s: 32 rss: 73Mb L: 21/25 MS: 1 ChangeBit- 00:08:05.668 [2024-06-07 22:58:57.739370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:60686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.739402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.668 [2024-06-07 22:58:57.739476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.739495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.668 #33 NEW cov: 12075 ft: 13834 corp: 16/332b lim: 40 exec/s: 33 rss: 73Mb L: 21/25 MS: 1 ChangeBit- 00:08:05.668 [2024-06-07 22:58:57.799701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686d68 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.799734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.668 [2024-06-07 22:58:57.799811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:686a6868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.799830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.668 [2024-06-07 22:58:57.799904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.799923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.668 #34 NEW cov: 12075 ft: 13855 corp: 17/357b lim: 40 exec/s: 34 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:08:05.668 [2024-06-07 22:58:57.869759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a 
cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.869793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.668 [2024-06-07 22:58:57.869866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a050a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.869886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.668 #35 NEW cov: 12075 ft: 13912 corp: 18/376b lim: 40 exec/s: 35 rss: 73Mb L: 19/25 MS: 1 ChangeBinInt- 00:08:05.668 [2024-06-07 22:58:57.929906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.929939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.668 [2024-06-07 22:58:57.930013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.668 [2024-06-07 22:58:57.930032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.927 #36 NEW cov: 12075 ft: 13933 corp: 19/395b lim: 40 exec/s: 36 rss: 73Mb L: 19/25 MS: 1 ChangeByte- 00:08:05.928 [2024-06-07 22:58:57.980095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:57.980128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.928 [2024-06-07 22:58:57.980199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:57.980218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.928 #37 NEW cov: 12075 ft: 13967 corp: 20/416b lim: 40 exec/s: 37 rss: 73Mb L: 21/25 MS: 1 EraseBytes- 00:08:05.928 [2024-06-07 22:58:58.050247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.050281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.928 [2024-06-07 22:58:58.050353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a0a050a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.050371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.928 #38 NEW cov: 12075 ft: 14015 corp: 21/435b lim: 40 exec/s: 38 rss: 73Mb L: 19/25 MS: 1 CrossOver- 00:08:05.928 [2024-06-07 22:58:58.120437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.120470] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.928 [2024-06-07 22:58:58.120546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68786868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.120565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.928 #39 NEW cov: 12075 ft: 14070 corp: 22/456b lim: 40 exec/s: 39 rss: 73Mb L: 21/25 MS: 1 ChangeBit- 00:08:05.928 [2024-06-07 22:58:58.170567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.170607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.928 [2024-06-07 22:58:58.170680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68786868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.928 [2024-06-07 22:58:58.170698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.187 #40 NEW cov: 12075 ft: 14073 corp: 23/477b lim: 40 exec/s: 40 rss: 74Mb L: 21/25 MS: 1 ChangeBinInt- 00:08:06.187 [2024-06-07 22:58:58.230922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.230955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.231030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.231049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.231120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:6868687a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.231139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.187 #41 NEW cov: 12075 ft: 14108 corp: 24/501b lim: 40 exec/s: 41 rss: 74Mb L: 24/25 MS: 1 ShuffleBytes- 00:08:06.187 [2024-06-07 22:58:58.281099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.281131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.281207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.281226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.281298] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a0a cdw11:0a0a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.281316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.187 #42 NEW cov: 12075 ft: 14120 corp: 25/529b lim: 40 exec/s: 42 rss: 74Mb L: 28/28 MS: 1 CrossOver- 00:08:06.187 [2024-06-07 22:58:58.331101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0adb6868 cdw11:60686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.331134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.331211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686830 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.331230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.187 #43 NEW cov: 12075 ft: 14178 corp: 26/550b lim: 40 exec/s: 43 rss: 74Mb L: 21/28 MS: 1 ChangeByte- 00:08:06.187 [2024-06-07 22:58:58.401261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a686868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.401298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.187 [2024-06-07 22:58:58.401373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:68686835 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.401392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.187 #44 NEW cov: 12075 ft: 14180 corp: 27/571b lim: 40 exec/s: 44 rss: 74Mb L: 21/28 MS: 1 ChangeASCIIInt- 00:08:06.187 [2024-06-07 22:58:58.441233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:68686868 cdw11:30687868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.187 [2024-06-07 22:58:58.441265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.447 #45 NEW cov: 12075 ft: 14537 corp: 28/585b lim: 40 exec/s: 45 rss: 74Mb L: 14/28 MS: 1 EraseBytes- 00:08:06.447 [2024-06-07 22:58:58.511399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:68786868 cdw11:68686868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.448 [2024-06-07 22:58:58.511433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.448 #46 NEW cov: 12075 ft: 14541 corp: 29/594b lim: 40 exec/s: 23 rss: 74Mb L: 9/28 MS: 1 EraseBytes- 00:08:06.448 #46 DONE cov: 12075 ft: 14541 corp: 29/594b lim: 40 exec/s: 23 rss: 74Mb 00:08:06.448 Done 46 runs in 2 second(s) 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 
00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.448 22:58:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:06.707 [2024-06-07 22:58:58.744448] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:06.707 [2024-06-07 22:58:58.744520] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154131 ] 00:08:06.707 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.967 [2024-06-07 22:58:59.040080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.967 [2024-06-07 22:58:59.130571] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.967 [2024-06-07 22:58:59.192925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.967 [2024-06-07 22:58:59.209303] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:06.967 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:06.967 INFO: Seed: 2298590698 00:08:07.226 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:07.226 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:07.226 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:07.226 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.226 #2 INITED exec/s: 0 rss: 64Mb 00:08:07.226 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.226 This may also happen if the target rejected all inputs we tried so far 00:08:07.226 [2024-06-07 22:58:59.265124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.226 [2024-06-07 22:58:59.265161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.226 [2024-06-07 22:58:59.265233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.226 [2024-06-07 22:58:59.265254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.485 NEW_FUNC[1/686]: 0x491a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:07.485 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.485 #16 NEW cov: 11842 ft: 11838 corp: 2/21b lim: 40 exec/s: 0 rss: 71Mb L: 20/20 MS: 4 CrossOver-CrossOver-InsertByte-InsertRepeatedBytes- 00:08:07.485 [2024-06-07 22:58:59.715856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b0b0bb7 cdw11:b7b7b7b7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.485 [2024-06-07 22:58:59.715897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.485 #20 NEW cov: 11973 ft: 13175 corp: 3/29b lim: 40 exec/s: 0 rss: 71Mb L: 8/20 MS: 4 ChangeBit-CopyPart-CopyPart-InsertRepeatedBytes- 00:08:07.745 [2024-06-07 22:58:59.766605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.766640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.766704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.766723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.766788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.766806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.766870] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.766894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.766955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:68686868 cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.766972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.745 #21 NEW cov: 11979 ft: 13735 corp: 4/69b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:07.745 [2024-06-07 22:58:59.836762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.836797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.836865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.836883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.836951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.836969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.837035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.837053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.745 [2024-06-07 22:58:59.837118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.837136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.745 #22 NEW cov: 12064 ft: 14048 corp: 5/109b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:07.745 [2024-06-07 22:58:59.906280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b0b0bb7 cdw11:b7b7b72b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.906312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.745 #23 NEW cov: 12064 ft: 14172 corp: 6/117b lim: 40 exec/s: 0 rss: 71Mb L: 8/40 MS: 1 ChangeByte- 00:08:07.745 [2024-06-07 22:58:59.976492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.745 [2024-06-07 22:58:59.976525] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.745 #24 NEW cov: 12064 ft: 14236 corp: 7/127b lim: 40 exec/s: 0 rss: 71Mb L: 10/40 MS: 1 EraseBytes- 00:08:08.004 [2024-06-07 22:59:00.027357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.027391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.027453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.027471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.027534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.027556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.027621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.027640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.027703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:feff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.027721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.005 #25 NEW cov: 12064 ft: 14303 corp: 8/167b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:08:08.005 [2024-06-07 22:59:00.097548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.097590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.097663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.097682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.097748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.097769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.097836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.097854] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.097920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:feff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.097939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.005 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:08.005 #26 NEW cov: 12087 ft: 14364 corp: 9/207b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:08.005 [2024-06-07 22:59:00.167385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.167418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.167488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff68 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.167506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.167571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.167594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.005 #27 NEW cov: 12087 ft: 14647 corp: 10/235b lim: 40 exec/s: 0 rss: 72Mb L: 28/40 MS: 1 EraseBytes- 00:08:08.005 [2024-06-07 22:59:00.217885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.217918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.217989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.218007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.218071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.218088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.218152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.218171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.218234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:8 nsid:0 cdw10:ffff6868 cdw11:68680a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.218253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.005 #28 NEW cov: 12087 ft: 14708 corp: 11/275b lim: 40 exec/s: 28 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:08.005 [2024-06-07 22:59:00.268006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.268040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.268108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.268126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.268192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.268211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.268274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.268291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.005 [2024-06-07 22:59:00.268357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:91979797 cdw11:97970a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.005 [2024-06-07 22:59:00.268376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.265 #29 NEW cov: 12087 ft: 14807 corp: 12/315b lim: 40 exec/s: 29 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:08:08.265 [2024-06-07 22:59:00.318143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.318176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.318242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.318265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.318327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.318345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.318409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.318427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.318491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:68686868 cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.318509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.265 #30 NEW cov: 12087 ft: 14823 corp: 13/355b lim: 40 exec/s: 30 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:08.265 [2024-06-07 22:59:00.368293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.368326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.368394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.368411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.368477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff7fff cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.368495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.368558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.368580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.368649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:91979797 cdw11:97970a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.368667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.265 #31 NEW cov: 12087 ft: 14886 corp: 14/395b lim: 40 exec/s: 31 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:08:08.265 [2024-06-07 22:59:00.437995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.438028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.438096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.438115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.265 #32 NEW cov: 12087 ft: 14912 corp: 15/415b lim: 40 exec/s: 32 rss: 72Mb L: 20/40 MS: 1 
ShuffleBytes- 00:08:08.265 [2024-06-07 22:59:00.488583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.488620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.488688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.488706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.488772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68236868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.488789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.488854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.488872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.488938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:feff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.488956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.265 #33 NEW cov: 12087 ft: 14929 corp: 16/455b lim: 40 exec/s: 33 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:08:08.265 [2024-06-07 22:59:00.538658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b0b0bb7 cdw11:b7b7b7ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.538691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.538756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.538774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.538843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.538862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.265 [2024-06-07 22:59:00.538928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.265 [2024-06-07 22:59:00.538946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.525 #34 NEW cov: 12087 ft: 14949 corp: 17/490b lim: 40 exec/s: 34 rss: 72Mb L: 
35/40 MS: 1 InsertRepeatedBytes- 00:08:08.525 [2024-06-07 22:59:00.608470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ff7c cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.608503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.608567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.608591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.525 #35 NEW cov: 12087 ft: 14979 corp: 18/511b lim: 40 exec/s: 35 rss: 72Mb L: 21/40 MS: 1 InsertByte- 00:08:08.525 [2024-06-07 22:59:00.658420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.658458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.525 #36 NEW cov: 12087 ft: 15080 corp: 19/519b lim: 40 exec/s: 36 rss: 72Mb L: 8/40 MS: 1 EraseBytes- 00:08:08.525 [2024-06-07 22:59:00.728984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.729017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.729085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.729103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.729170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffff68 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.729188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.525 #37 NEW cov: 12087 ft: 15096 corp: 20/547b lim: 40 exec/s: 37 rss: 72Mb L: 28/40 MS: 1 EraseBytes- 00:08:08.525 [2024-06-07 22:59:00.779456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.779489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.779555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.779573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.779647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:08.525 [2024-06-07 22:59:00.779665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.779731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.779749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.525 [2024-06-07 22:59:00.779813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.525 [2024-06-07 22:59:00.779832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.785 #38 NEW cov: 12087 ft: 15122 corp: 21/587b lim: 40 exec/s: 38 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:08.785 [2024-06-07 22:59:00.829582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.829616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.829684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffeffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.829702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.829770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.829791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.829860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:91979797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.829878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.829944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97979797 cdw11:ffff0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.829962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.785 #44 NEW cov: 12087 ft: 15124 corp: 22/627b lim: 40 exec/s: 44 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:08:08.785 [2024-06-07 22:59:00.899056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b0b0bb7 cdw11:b7b7b70a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.899089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.785 #45 NEW cov: 12087 ft: 15138 corp: 23/639b lim: 40 exec/s: 45 rss: 72Mb L: 12/40 MS: 1 CrossOver- 00:08:08.785 [2024-06-07 22:59:00.949898] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ff0a cdw11:98ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.949931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.950000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.950018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.950083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff0a0a68 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.950101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.950167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.950184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:00.950252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffff6868 cdw11:68680a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:00.950271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.785 #46 NEW cov: 12087 ft: 15159 corp: 24/679b lim: 40 exec/s: 46 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:08:08.785 [2024-06-07 22:59:01.019554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ff7c cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:01.019591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.785 [2024-06-07 22:59:01.019662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.785 [2024-06-07 22:59:01.019680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.044 #47 NEW cov: 12087 ft: 15184 corp: 25/700b lim: 40 exec/s: 47 rss: 73Mb L: 21/40 MS: 1 CopyPart- 00:08:09.044 [2024-06-07 22:59:01.089788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ff7c cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.044 [2024-06-07 22:59:01.089820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.044 [2024-06-07 22:59:01.089890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.044 [2024-06-07 22:59:01.089908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.044 #48 NEW cov: 12087 ft: 15189 corp: 
26/721b lim: 40 exec/s: 48 rss: 73Mb L: 21/40 MS: 1 CrossOver- 00:08:09.044 [2024-06-07 22:59:01.140106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.140140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.140210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff68 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.140228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.140296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:686868f2 cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.140314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.045 #49 NEW cov: 12087 ft: 15240 corp: 27/749b lim: 40 exec/s: 49 rss: 73Mb L: 28/40 MS: 1 ChangeByte- 00:08:09.045 [2024-06-07 22:59:01.210659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a98ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.210692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.210761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.210779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.210844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff7fff cdw11:68686868 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.210862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.210930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:68686868 cdw11:68689797 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.210948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.045 [2024-06-07 22:59:01.211012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:97976868 cdw11:91970a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.045 [2024-06-07 22:59:01.211030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.045 #50 NEW cov: 12087 ft: 15252 corp: 28/789b lim: 40 exec/s: 25 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:09.045 #50 DONE cov: 12087 ft: 15252 corp: 28/789b lim: 40 exec/s: 25 rss: 73Mb 00:08:09.045 Done 50 runs in 2 second(s) 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- 
../common.sh@72 -- # (( i++ )) 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.304 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.305 22:59:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:09.305 [2024-06-07 22:59:01.445483] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:09.305 [2024-06-07 22:59:01.445555] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154668 ] 00:08:09.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.564 [2024-06-07 22:59:01.680060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.564 [2024-06-07 22:59:01.758831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.564 [2024-06-07 22:59:01.821172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.564 [2024-06-07 22:59:01.837551] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:09.823 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:09.823 INFO: Seed: 631619482 00:08:09.823 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:09.823 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:09.823 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:09.823 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.823 #2 INITED exec/s: 0 rss: 63Mb 00:08:09.823 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.823 This may also happen if the target rejected all inputs we tried so far 00:08:09.823 [2024-06-07 22:59:01.893239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.823 [2024-06-07 22:59:01.893276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.823 [2024-06-07 22:59:01.893345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a61 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.823 [2024-06-07 22:59:01.893364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.083 NEW_FUNC[1/685]: 0x4937d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:10.083 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.083 #9 NEW cov: 11835 ft: 11818 corp: 2/19b lim: 40 exec/s: 0 rss: 70Mb L: 18/18 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:10.083 [2024-06-07 22:59:02.344668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.083 [2024-06-07 22:59:02.344710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.083 [2024-06-07 22:59:02.344777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.083 [2024-06-07 22:59:02.344796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.083 [2024-06-07 22:59:02.344860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.083 [2024-06-07 22:59:02.344878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.083 [2024-06-07 22:59:02.344941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a0f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.083 [2024-06-07 22:59:02.344960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.342 NEW_FUNC[1/1]: 0x1d917c0 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1320 00:08:10.342 #18 NEW cov: 11971 ft: 12714 corp: 3/51b lim: 40 exec/s: 0 rss: 
71Mb L: 32/32 MS: 4 CrossOver-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:10.342 [2024-06-07 22:59:02.404378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.342 [2024-06-07 22:59:02.404414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.342 [2024-06-07 22:59:02.404480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000a61 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.342 [2024-06-07 22:59:02.404499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.342 #19 NEW cov: 11977 ft: 13035 corp: 4/69b lim: 40 exec/s: 0 rss: 71Mb L: 18/32 MS: 1 ChangeBit- 00:08:10.342 [2024-06-07 22:59:02.474591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.342 [2024-06-07 22:59:02.474627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.342 [2024-06-07 22:59:02.474694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b6b6b6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.474714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.343 #24 NEW cov: 12062 ft: 13383 corp: 5/87b lim: 40 exec/s: 0 rss: 71Mb L: 18/32 MS: 5 CopyPart-ChangeBit-CopyPart-InsertByte-InsertRepeatedBytes- 00:08:10.343 [2024-06-07 22:59:02.525114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eaeaeaea cdw11:ea000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.525149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.525219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.525238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.525302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.525320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.525385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.525404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.343 #30 NEW cov: 12062 ft: 13441 corp: 6/124b lim: 40 exec/s: 0 rss: 71Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:10.343 [2024-06-07 22:59:02.595228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.595262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.595334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.595353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.595419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.595437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.343 [2024-06-07 22:59:02.595503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.343 [2024-06-07 22:59:02.595521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.602 #31 NEW cov: 12062 ft: 13511 corp: 7/163b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:10.602 [2024-06-07 22:59:02.645066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.645099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.645165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b6b6b60a cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.645184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.602 #32 NEW cov: 12062 ft: 13588 corp: 8/181b lim: 40 exec/s: 0 rss: 71Mb L: 18/39 MS: 1 CrossOver- 00:08:10.602 [2024-06-07 22:59:02.715552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.715590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.715655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.715674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.715742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.715761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.715825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.715843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.602 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:10.602 #33 NEW cov: 12085 ft: 13611 corp: 9/220b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ShuffleBytes- 00:08:10.602 [2024-06-07 22:59:02.785795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.785828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.785892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.785910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.785973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.785991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.786056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffb6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.786074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.602 #34 NEW cov: 12085 ft: 13637 corp: 10/259b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:10.602 [2024-06-07 22:59:02.835604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.835637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.602 [2024-06-07 22:59:02.835703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00009e00 cdw11:00000a61 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.602 [2024-06-07 22:59:02.835721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.602 #35 NEW cov: 12085 ft: 13682 corp: 11/277b lim: 40 exec/s: 0 rss: 72Mb L: 18/39 MS: 1 ChangeByte- 00:08:10.862 [2024-06-07 22:59:02.885736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:02.885769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.862 [2024-06-07 22:59:02.885835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b6b6b62d cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:10.862 [2024-06-07 22:59:02.885854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.862 #36 NEW cov: 12085 ft: 13699 corp: 12/295b lim: 40 exec/s: 36 rss: 72Mb L: 18/39 MS: 1 ChangeByte- 00:08:10.862 [2024-06-07 22:59:02.955783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:02.955816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.862 #37 NEW cov: 12085 ft: 14509 corp: 13/303b lim: 40 exec/s: 37 rss: 72Mb L: 8/39 MS: 1 CrossOver- 00:08:10.862 [2024-06-07 22:59:03.005916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:03.005949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.862 #38 NEW cov: 12085 ft: 14519 corp: 14/313b lim: 40 exec/s: 38 rss: 72Mb L: 10/39 MS: 1 CopyPart- 00:08:10.862 [2024-06-07 22:59:03.076070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00003e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:03.076102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.862 #39 NEW cov: 12085 ft: 14656 corp: 15/321b lim: 40 exec/s: 39 rss: 72Mb L: 8/39 MS: 1 ChangeByte- 00:08:10.862 [2024-06-07 22:59:03.126408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:03.126441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.862 [2024-06-07 22:59:03.126509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:b6b6b6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.862 [2024-06-07 22:59:03.126527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.120 #40 NEW cov: 12085 ft: 14674 corp: 16/339b lim: 40 exec/s: 40 rss: 72Mb L: 18/39 MS: 1 ShuffleBytes- 00:08:11.120 [2024-06-07 22:59:03.166507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.166542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.120 [2024-06-07 22:59:03.166613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.166634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.120 #41 NEW cov: 12085 ft: 14681 corp: 17/361b lim: 40 exec/s: 41 rss: 72Mb L: 22/39 MS: 1 CrossOver- 00:08:11.120 [2024-06-07 22:59:03.237059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eaeaeaea cdw11:ea000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.237093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.120 [2024-06-07 22:59:03.237161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.237180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.120 [2024-06-07 22:59:03.237245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00001a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.237264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.120 [2024-06-07 22:59:03.237329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.237351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.120 #42 NEW cov: 12085 ft: 14702 corp: 18/398b lim: 40 exec/s: 42 rss: 72Mb L: 37/39 MS: 1 ChangeByte- 00:08:11.120 [2024-06-07 22:59:03.306735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6b6b64a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.306768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.120 #43 NEW cov: 12085 ft: 14787 corp: 19/408b lim: 40 exec/s: 43 rss: 72Mb L: 10/39 MS: 1 CrossOver- 00:08:11.120 [2024-06-07 22:59:03.377060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.377093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.120 [2024-06-07 22:59:03.377161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00002100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.120 [2024-06-07 22:59:03.377180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.379 #44 NEW cov: 12085 ft: 14822 corp: 20/431b lim: 40 exec/s: 44 rss: 73Mb L: 23/39 MS: 1 InsertByte- 00:08:11.379 [2024-06-07 22:59:03.437254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0ab6b6 cdw11:b6ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.437287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.379 [2024-06-07 22:59:03.437355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffb6b6 cdw11:b6b6b6b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.437374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.379 #45 NEW cov: 12085 ft: 14842 corp: 21/451b lim: 40 exec/s: 45 rss: 73Mb L: 20/39 MS: 1 EraseBytes- 00:08:11.379 [2024-06-07 22:59:03.507238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.507271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.379 #46 NEW cov: 12085 ft: 14897 corp: 22/466b lim: 40 exec/s: 46 rss: 73Mb L: 15/39 MS: 1 EraseBytes- 00:08:11.379 [2024-06-07 22:59:03.578019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000003a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.578052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.379 [2024-06-07 22:59:03.578119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.578138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.379 [2024-06-07 22:59:03.578203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.578221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.379 [2024-06-07 22:59:03.578287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.578306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.379 #47 NEW cov: 12085 ft: 14929 corp: 23/505b lim: 40 exec/s: 47 rss: 73Mb L: 39/39 MS: 1 ChangeByte- 00:08:11.379 [2024-06-07 22:59:03.647850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.647883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.379 [2024-06-07 22:59:03.647948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:009e0000 cdw11:9e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.379 [2024-06-07 22:59:03.647967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.638 #48 NEW cov: 12085 ft: 14933 corp: 24/525b lim: 40 exec/s: 48 rss: 73Mb L: 20/39 MS: 1 CopyPart- 00:08:11.638 [2024-06-07 22:59:03.697943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.697977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.638 [2024-06-07 22:59:03.698042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:03000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.698061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.638 #49 NEW cov: 12085 ft: 14989 corp: 25/547b lim: 40 exec/s: 49 rss: 73Mb L: 22/39 MS: 1 ChangeBinInt- 00:08:11.638 [2024-06-07 22:59:03.747908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:4a0a96b6 cdw11:b6b6b64a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.747942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.638 #50 NEW cov: 12085 ft: 15076 corp: 26/557b lim: 40 exec/s: 50 rss: 73Mb L: 10/39 MS: 1 ChangeBit- 00:08:11.638 [2024-06-07 22:59:03.818673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000003a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.818707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.638 [2024-06-07 22:59:03.818773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.818792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.638 [2024-06-07 22:59:03.818859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.818877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.638 [2024-06-07 22:59:03.818945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.818963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.638 #51 NEW cov: 12085 ft: 15106 corp: 27/596b lim: 40 exec/s: 51 rss: 73Mb L: 39/39 MS: 1 ShuffleBytes- 00:08:11.638 [2024-06-07 22:59:03.888357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.638 [2024-06-07 22:59:03.888390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.898 #52 NEW cov: 12085 ft: 15113 corp: 28/604b lim: 40 exec/s: 26 rss: 73Mb L: 8/39 MS: 1 ChangeBinInt- 00:08:11.898 #52 DONE cov: 12085 ft: 15113 corp: 28/604b lim: 40 exec/s: 26 rss: 73Mb 00:08:11.898 Done 52 runs in 2 second(s) 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:11.898 
22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:11.898 22:59:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:11.898 [2024-06-07 22:59:04.099895] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:11.898 [2024-06-07 22:59:04.099981] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155198 ] 00:08:11.898 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.157 [2024-06-07 22:59:04.330342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.157 [2024-06-07 22:59:04.408857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.417 [2024-06-07 22:59:04.471221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.417 [2024-06-07 22:59:04.487611] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:12.417 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.417 INFO: Seed: 3280614479 00:08:12.417 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:12.417 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:12.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:12.417 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.417 #2 INITED exec/s: 0 rss: 63Mb 00:08:12.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:12.417 This may also happen if the target rejected all inputs we tried so far 00:08:12.417 [2024-06-07 22:59:04.536588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8b80000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.417 [2024-06-07 22:59:04.536624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.417 [2024-06-07 22:59:04.536704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.417 [2024-06-07 22:59:04.536724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.417 [2024-06-07 22:59:04.536795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.417 [2024-06-07 22:59:04.536813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 NEW_FUNC[1/685]: 0x495390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:12.986 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.986 #6 NEW cov: 11829 ft: 11830 corp: 2/30b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 4 ChangeBinInt-CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:12.986 [2024-06-07 22:59:04.987783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:04.987825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:04.987898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:04.987916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:04.987984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:04.988002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 #10 NEW cov: 11959 ft: 12527 corp: 3/57b lim: 40 exec/s: 0 rss: 71Mb L: 27/29 MS: 4 ChangeBit-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:12.986 [2024-06-07 22:59:05.037791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8b80000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.037827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.037902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ff00 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.037921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.037990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.038008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 #11 NEW cov: 11965 ft: 12727 corp: 4/86b lim: 40 exec/s: 0 rss: 71Mb L: 29/29 MS: 1 ChangeBinInt- 00:08:12.986 [2024-06-07 22:59:05.108147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.108181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.108258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.108281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.108353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.108371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.108442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.108459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.986 #13 NEW cov: 12050 ft: 13393 corp: 5/119b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:12.986 [2024-06-07 22:59:05.158293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.158327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.158402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.158421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.158488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.158505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.158573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.158601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.986 #14 NEW cov: 12050 ft: 13582 corp: 6/152b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ShuffleBytes- 00:08:12.986 [2024-06-07 22:59:05.228478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.228512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.228585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.228604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.228674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:3b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.228692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.986 [2024-06-07 22:59:05.228763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.986 [2024-06-07 22:59:05.228781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.246 #15 NEW cov: 12050 ft: 13634 corp: 7/185b lim: 40 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 ChangeByte- 00:08:13.246 [2024-06-07 22:59:05.298725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.246 [2024-06-07 22:59:05.298764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.246 [2024-06-07 22:59:05.298839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.246 [2024-06-07 22:59:05.298857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.246 [2024-06-07 22:59:05.298927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.246 [2024-06-07 22:59:05.298946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.246 [2024-06-07 22:59:05.299016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.246 [2024-06-07 22:59:05.299034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.246 #16 NEW cov: 12050 ft: 13727 corp: 
8/223b lim: 40 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:13.247 [2024-06-07 22:59:05.368909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.368944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.369018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.369037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.369108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.369126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.369194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:0000010e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.369212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.247 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:13.247 #17 NEW cov: 12073 ft: 13775 corp: 9/261b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CMP- DE: "\001\016>V\347\330\313L"- 00:08:13.247 [2024-06-07 22:59:05.439093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.439127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.439204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.439223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.439291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.439309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.439383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:0000010e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.439401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.247 #18 NEW cov: 12073 ft: 13888 corp: 10/299b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 ChangeByte- 00:08:13.247 [2024-06-07 22:59:05.509270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e0000fe cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.509304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.509380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.509399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.509469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.509487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.247 [2024-06-07 22:59:05.509554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.247 [2024-06-07 22:59:05.509573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.506 #19 NEW cov: 12073 ft: 13922 corp: 11/336b lim: 40 exec/s: 19 rss: 72Mb L: 37/38 MS: 1 CMP- DE: "\376\377\377\377"- 00:08:13.506 [2024-06-07 22:59:05.559412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.506 [2024-06-07 22:59:05.559446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.506 [2024-06-07 22:59:05.559521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.506 [2024-06-07 22:59:05.559540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.506 [2024-06-07 22:59:05.559616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.506 [2024-06-07 22:59:05.559635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.506 [2024-06-07 22:59:05.559703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:0000010e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.506 [2024-06-07 22:59:05.559721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.506 #20 NEW cov: 12073 ft: 13956 corp: 12/374b lim: 40 exec/s: 20 rss: 72Mb L: 38/38 MS: 1 ShuffleBytes- 00:08:13.506 [2024-06-07 22:59:05.629490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.506 [2024-06-07 22:59:05.629523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:13.507 [2024-06-07 22:59:05.629597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:3d808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.629619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.629687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.629704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.507 #21 NEW cov: 12073 ft: 13994 corp: 13/401b lim: 40 exec/s: 21 rss: 72Mb L: 27/38 MS: 1 ChangeByte- 00:08:13.507 [2024-06-07 22:59:05.689765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.689798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.689870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:010e3e56 cdw11:e7d8cb4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.689889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.689956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.689974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.690042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.690060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.507 #22 NEW cov: 12073 ft: 14042 corp: 14/439b lim: 40 exec/s: 22 rss: 72Mb L: 38/38 MS: 1 PersAutoDict- DE: "\001\016>V\347\330\313L"- 00:08:13.507 [2024-06-07 22:59:05.739770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.739803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.739876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.739894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.507 [2024-06-07 22:59:05.739965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8080feff cdw11:ffff8080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.507 [2024-06-07 22:59:05.739983] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.507 #23 NEW cov: 12073 ft: 14063 corp: 15/466b lim: 40 exec/s: 23 rss: 72Mb L: 27/38 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:08:13.766 [2024-06-07 22:59:05.790069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.790103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.790176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.790194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.790262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:17000000 cdw11:003b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.790284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.790356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.790375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.766 #24 NEW cov: 12073 ft: 14068 corp: 16/500b lim: 40 exec/s: 24 rss: 72Mb L: 34/38 MS: 1 InsertByte- 00:08:13.766 [2024-06-07 22:59:05.840078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12800500 cdw11:00008080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.840111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.840185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.840203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.840273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.840291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.766 #25 NEW cov: 12073 ft: 14081 corp: 17/527b lim: 40 exec/s: 25 rss: 72Mb L: 27/38 MS: 1 CMP- DE: "\005\000\000\000"- 00:08:13.766 [2024-06-07 22:59:05.880352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.880386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.880462] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.880481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.880549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.880566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.766 [2024-06-07 22:59:05.880640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.880658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.766 #26 NEW cov: 12073 ft: 14090 corp: 18/565b lim: 40 exec/s: 26 rss: 72Mb L: 38/38 MS: 1 ShuffleBytes- 00:08:13.766 [2024-06-07 22:59:05.930421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.766 [2024-06-07 22:59:05.930454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.767 [2024-06-07 22:59:05.930529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:05.930548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.767 [2024-06-07 22:59:05.930627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:05.930645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.767 [2024-06-07 22:59:05.930713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00002600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:05.930731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.767 #27 NEW cov: 12073 ft: 14107 corp: 19/603b lim: 40 exec/s: 27 rss: 72Mb L: 38/38 MS: 1 ChangeBinInt- 00:08:13.767 [2024-06-07 22:59:06.000505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f8b80000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:06.000538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.767 [2024-06-07 22:59:06.000612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:06.000631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:13.767 [2024-06-07 22:59:06.000698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.767 [2024-06-07 22:59:06.000715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.026 #28 NEW cov: 12073 ft: 14133 corp: 20/628b lim: 40 exec/s: 28 rss: 72Mb L: 25/38 MS: 1 EraseBytes- 00:08:14.026 [2024-06-07 22:59:06.070904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.070937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.071010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.071028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.071097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.071115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.071184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:0000010e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.071203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.027 #29 NEW cov: 12073 ft: 14189 corp: 21/666b lim: 40 exec/s: 29 rss: 72Mb L: 38/38 MS: 1 ChangeByte- 00:08:14.027 [2024-06-07 22:59:06.121042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.121074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.121147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:010e3e56 cdw11:e7d8cb4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.121165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.121245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.121263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.121331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.121349] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.027 #30 NEW cov: 12073 ft: 14205 corp: 22/704b lim: 40 exec/s: 30 rss: 72Mb L: 38/38 MS: 1 ChangeBit- 00:08:14.027 [2024-06-07 22:59:06.191208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:0000005a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.191240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.191313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.191332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.191405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.191423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.191496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.191514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.027 #31 NEW cov: 12073 ft: 14211 corp: 23/738b lim: 40 exec/s: 31 rss: 72Mb L: 34/38 MS: 1 InsertByte- 00:08:14.027 [2024-06-07 22:59:06.241184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.241218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.241289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.241307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.241377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80110000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.241394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.027 #32 NEW cov: 12073 ft: 14226 corp: 24/769b lim: 40 exec/s: 32 rss: 73Mb L: 31/38 MS: 1 CMP- DE: "\021\000\000\000"- 00:08:14.027 [2024-06-07 22:59:06.281490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e0000fe cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.281523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.281600] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.281623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.281696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.281714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.027 [2024-06-07 22:59:06.281783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.027 [2024-06-07 22:59:06.281801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.286 #33 NEW cov: 12073 ft: 14322 corp: 25/806b lim: 40 exec/s: 33 rss: 73Mb L: 37/38 MS: 1 ChangeByte- 00:08:14.286 [2024-06-07 22:59:06.351716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.351749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.286 [2024-06-07 22:59:06.351825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:010e3e56 cdw11:e7d8cb4c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.351843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.286 [2024-06-07 22:59:06.351912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.351930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.286 [2024-06-07 22:59:06.351999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:32003b00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.352017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.286 #34 NEW cov: 12073 ft: 14382 corp: 26/845b lim: 40 exec/s: 34 rss: 73Mb L: 39/39 MS: 1 InsertByte- 00:08:14.286 [2024-06-07 22:59:06.421757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:12800500 cdw11:00110000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.421789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.286 [2024-06-07 22:59:06.421864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00008080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.421883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:14.286 [2024-06-07 22:59:06.421954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:80808080 cdw11:80808080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.421973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.286 #35 NEW cov: 12073 ft: 14432 corp: 27/876b lim: 40 exec/s: 35 rss: 73Mb L: 31/39 MS: 1 PersAutoDict- DE: "\021\000\000\000"- 00:08:14.286 [2024-06-07 22:59:06.492142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:7e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.286 [2024-06-07 22:59:06.492175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.287 [2024-06-07 22:59:06.492247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.287 [2024-06-07 22:59:06.492269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.287 [2024-06-07 22:59:06.492338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.287 [2024-06-07 22:59:06.492356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.287 [2024-06-07 22:59:06.492427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:003b0000 cdw11:00002600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.287 [2024-06-07 22:59:06.492445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.287 #36 NEW cov: 12073 ft: 14458 corp: 28/915b lim: 40 exec/s: 18 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:08:14.287 #36 DONE cov: 12073 ft: 14458 corp: 28/915b lim: 40 exec/s: 18 rss: 73Mb 00:08:14.287 ###### Recommended dictionary. ###### 00:08:14.287 "\001\016>V\347\330\313L" # Uses: 1 00:08:14.287 "\376\377\377\377" # Uses: 1 00:08:14.287 "\005\000\000\000" # Uses: 0 00:08:14.287 "\021\000\000\000" # Uses: 1 00:08:14.287 ###### End of recommended dictionary. 
###### 00:08:14.287 Done 36 runs in 2 second(s) 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.546 22:59:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:14.546 [2024-06-07 22:59:06.724234] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:14.546 [2024-06-07 22:59:06.724310] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155554 ] 00:08:14.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.805 [2024-06-07 22:59:06.960584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.805 [2024-06-07 22:59:07.042156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.064 [2024-06-07 22:59:07.104545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.064 [2024-06-07 22:59:07.120918] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:15.064 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:15.064 INFO: Seed: 1621646308 00:08:15.064 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:15.064 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:15.064 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:15.064 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.064 #2 INITED exec/s: 0 rss: 63Mb 00:08:15.064 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:15.064 This may also happen if the target rejected all inputs we tried so far 00:08:15.064 [2024-06-07 22:59:07.186387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.064 [2024-06-07 22:59:07.186425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 NEW_FUNC[1/686]: 0x496f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:15.632 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.632 #3 NEW cov: 11823 ft: 11824 corp: 2/10b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:15.632 [2024-06-07 22:59:07.637653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.637715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 #4 NEW cov: 11953 ft: 12322 corp: 3/19b lim: 35 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CopyPart- 00:08:15.632 [2024-06-07 22:59:07.708140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.708179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.708249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.708270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.708333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.708354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.708421] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.708442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.632 #10 NEW cov: 11959 ft: 13343 corp: 4/52b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:15.632 [2024-06-07 22:59:07.757746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.757779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 #21 NEW cov: 12044 ft: 13689 corp: 5/60b lim: 35 exec/s: 0 rss: 71Mb L: 8/33 MS: 1 EraseBytes- 00:08:15.632 [2024-06-07 22:59:07.807915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.807954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 #22 NEW cov: 12044 ft: 13738 corp: 6/69b lim: 35 exec/s: 0 rss: 71Mb L: 9/33 MS: 1 CopyPart- 00:08:15.632 [2024-06-07 22:59:07.847990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.848023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 #23 NEW cov: 12044 ft: 13854 corp: 7/79b lim: 35 exec/s: 0 rss: 71Mb L: 10/33 MS: 1 CopyPart- 00:08:15.632 [2024-06-07 22:59:07.908704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.908736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.908801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.908820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.908886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.908905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.632 [2024-06-07 22:59:07.908969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.632 [2024-06-07 22:59:07.908987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.892 #24 NEW cov: 12051 ft: 14123 corp: 8/113b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:15.892 [2024-06-07 22:59:07.968527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:07.968563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.892 [2024-06-07 22:59:07.968635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:07.968656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.892 #25 NEW cov: 12051 ft: 14387 corp: 9/131b lim: 35 exec/s: 0 
rss: 71Mb L: 18/34 MS: 1 EraseBytes- 00:08:15.892 [2024-06-07 22:59:08.038519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:08.038551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.892 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:15.892 #26 NEW cov: 12074 ft: 14434 corp: 10/141b lim: 35 exec/s: 0 rss: 72Mb L: 10/34 MS: 1 ShuffleBytes- 00:08:15.892 [2024-06-07 22:59:08.109100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:08.109137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.892 [2024-06-07 22:59:08.109204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:08.109225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.892 [2024-06-07 22:59:08.109291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.892 [2024-06-07 22:59:08.109316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.892 #31 NEW cov: 12074 ft: 14666 corp: 11/162b lim: 35 exec/s: 0 rss: 72Mb L: 21/34 MS: 5 EraseBytes-ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:16.151 [2024-06-07 22:59:08.178934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.178970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.151 #32 NEW cov: 12074 ft: 14745 corp: 12/170b lim: 35 exec/s: 32 rss: 72Mb L: 8/34 MS: 1 ChangeByte- 00:08:16.151 [2024-06-07 22:59:08.249614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.249650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.151 [2024-06-07 22:59:08.249720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.249738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.151 [2024-06-07 22:59:08.249806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000034 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.249824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.151 [2024-06-07 22:59:08.249892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000034 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:16.151 [2024-06-07 22:59:08.249911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.151 #33 NEW cov: 12074 ft: 14766 corp: 13/201b lim: 35 exec/s: 33 rss: 72Mb L: 31/34 MS: 1 InsertRepeatedBytes- 00:08:16.151 [2024-06-07 22:59:08.299266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.299301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.151 #34 NEW cov: 12074 ft: 14792 corp: 14/210b lim: 35 exec/s: 34 rss: 72Mb L: 9/34 MS: 1 InsertByte- 00:08:16.151 [2024-06-07 22:59:08.349381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.151 [2024-06-07 22:59:08.349417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.152 #35 NEW cov: 12074 ft: 14842 corp: 15/217b lim: 35 exec/s: 35 rss: 72Mb L: 7/34 MS: 1 CrossOver- 00:08:16.152 [2024-06-07 22:59:08.390243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.152 [2024-06-07 22:59:08.390276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.152 [2024-06-07 22:59:08.390345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.152 [2024-06-07 22:59:08.390362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.152 [2024-06-07 22:59:08.390428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.152 [2024-06-07 22:59:08.390446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.152 [2024-06-07 22:59:08.390518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.152 [2024-06-07 22:59:08.390540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.152 [2024-06-07 22:59:08.390606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.152 [2024-06-07 22:59:08.390624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:16.411 #36 NEW cov: 12074 ft: 14932 corp: 16/252b lim: 35 exec/s: 36 rss: 72Mb L: 35/35 MS: 1 InsertByte- 00:08:16.411 [2024-06-07 22:59:08.460239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.460275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.460343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.460364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.460429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.460450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.460516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.460537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.411 #37 NEW cov: 12074 ft: 14939 corp: 17/286b lim: 35 exec/s: 37 rss: 72Mb L: 34/35 MS: 1 CopyPart- 00:08:16.411 [2024-06-07 22:59:08.510153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.510186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.510255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.510275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.510340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.510361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.411 #38 NEW cov: 12074 ft: 14958 corp: 18/307b lim: 35 exec/s: 38 rss: 72Mb L: 21/35 MS: 1 ShuffleBytes- 00:08:16.411 [2024-06-07 22:59:08.580186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.580221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.580286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.580307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.411 #39 NEW cov: 12074 ft: 14962 corp: 19/326b lim: 35 exec/s: 39 rss: 72Mb L: 19/35 MS: 1 InsertByte- 00:08:16.411 [2024-06-07 22:59:08.650729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.650767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.650832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000003b SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.650851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.650919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000003b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.650937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.411 [2024-06-07 22:59:08.651002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.411 [2024-06-07 22:59:08.651023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.671 #40 NEW cov: 12074 ft: 14982 corp: 20/357b lim: 35 exec/s: 40 rss: 72Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:08:16.671 [2024-06-07 22:59:08.720596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.671 [2024-06-07 22:59:08.720631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.671 [2024-06-07 22:59:08.720698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.671 [2024-06-07 22:59:08.720719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.671 #41 NEW cov: 12074 ft: 14983 corp: 21/375b lim: 35 exec/s: 41 rss: 72Mb L: 18/35 MS: 1 ChangeByte- 00:08:16.671 [2024-06-07 22:59:08.770555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.671 [2024-06-07 22:59:08.770598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.671 #42 NEW cov: 12074 ft: 15075 corp: 22/384b lim: 35 exec/s: 42 rss: 73Mb L: 9/35 MS: 1 ChangeByte- 00:08:16.671 [2024-06-07 22:59:08.830740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.671 [2024-06-07 22:59:08.830774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.671 #43 NEW cov: 12074 ft: 15090 corp: 23/392b lim: 35 exec/s: 43 rss: 73Mb L: 8/35 MS: 1 CrossOver- 00:08:16.671 [2024-06-07 22:59:08.890916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.671 [2024-06-07 22:59:08.890951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.671 #44 NEW cov: 12074 ft: 15101 corp: 24/403b lim: 35 exec/s: 44 rss: 73Mb L: 11/35 MS: 1 InsertByte- 00:08:16.930 [2024-06-07 22:59:08.951055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:08.951089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.930 #45 NEW cov: 12074 ft: 15177 corp: 25/416b lim: 35 exec/s: 45 rss: 73Mb L: 13/35 MS: 1 EraseBytes- 00:08:16.930 [2024-06-07 22:59:09.011723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000086 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.011758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.011831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.011852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.011920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.011941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.012006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.012027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.930 #46 NEW cov: 12074 ft: 15192 corp: 26/450b lim: 35 exec/s: 46 rss: 73Mb L: 34/35 MS: 1 CopyPart- 00:08:16.930 [2024-06-07 22:59:09.062078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.062111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.062179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.062197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.062263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.062280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.062346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.062367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.062430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.062448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:16.930 #47 NEW cov: 12074 ft: 15205 corp: 27/485b lim: 35 exec/s: 47 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:16.930 [2024-06-07 
22:59:09.132066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.132098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.132166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.132185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.132253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.132271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.132337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000079 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.132356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.930 #48 NEW cov: 12074 ft: 15222 corp: 28/519b lim: 35 exec/s: 48 rss: 73Mb L: 34/35 MS: 1 ChangeBinInt- 00:08:16.930 [2024-06-07 22:59:09.182031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.182066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.182135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.182155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.930 [2024-06-07 22:59:09.182222] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000d7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.930 [2024-06-07 22:59:09.182242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.190 #49 NEW cov: 12074 ft: 15226 corp: 29/544b lim: 35 exec/s: 24 rss: 73Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:08:17.190 #49 DONE cov: 12074 ft: 15226 corp: 29/544b lim: 35 exec/s: 24 rss: 73Mb 00:08:17.190 Done 49 runs in 2 second(s) 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.190 22:59:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:17.190 [2024-06-07 22:59:09.414266] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:17.190 [2024-06-07 22:59:09.414340] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156025 ] 00:08:17.190 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.450 [2024-06-07 22:59:09.650786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.709 [2024-06-07 22:59:09.730227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.709 [2024-06-07 22:59:09.794205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.709 [2024-06-07 22:59:09.810584] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:17.709 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.709 INFO: Seed: 13684533 00:08:17.709 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:17.709 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:17.709 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:17.709 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.709 #2 INITED exec/s: 0 rss: 63Mb 00:08:17.709 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:17.709 This may also happen if the target rejected all inputs we tried so far 00:08:17.709 [2024-06-07 22:59:09.880842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.709 [2024-06-07 22:59:09.880888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.709 [2024-06-07 22:59:09.881025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.709 [2024-06-07 22:59:09.881046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.709 [2024-06-07 22:59:09.881185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.709 [2024-06-07 22:59:09.881209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.278 NEW_FUNC[1/686]: 0x498490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:18.278 NEW_FUNC[2/686]: 0x4b8410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:18.278 #16 NEW cov: 11825 ft: 11823 corp: 2/33b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 4 CopyPart-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:08:18.278 [2024-06-07 22:59:10.352149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.352195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.278 [2024-06-07 22:59:10.352323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.352345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.278 [2024-06-07 22:59:10.352479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.352501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.278 #17 NEW cov: 11955 ft: 12390 corp: 3/65b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 CopyPart- 00:08:18.278 [2024-06-07 22:59:10.431967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.432002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.278 [2024-06-07 22:59:10.432139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.432163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.278 NEW_FUNC[1/1]: 0x4b18e0 in feat_arbitration 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:18.278 #20 NEW cov: 11999 ft: 13335 corp: 4/92b lim: 35 exec/s: 0 rss: 71Mb L: 27/32 MS: 3 CMP-EraseBytes-InsertRepeatedBytes- DE: "\001\000\000\001"- 00:08:18.278 [2024-06-07 22:59:10.491934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.278 [2024-06-07 22:59:10.491969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.278 #21 NEW cov: 12084 ft: 13917 corp: 5/109b lim: 35 exec/s: 0 rss: 71Mb L: 17/32 MS: 1 EraseBytes- 00:08:18.538 [2024-06-07 22:59:10.572681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.572715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.572846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.572870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.573013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.573033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.538 #24 NEW cov: 12084 ft: 13981 corp: 6/142b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:08:18.538 [2024-06-07 22:59:10.632873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.632909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.633048] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.633071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.633206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.633226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.538 #25 NEW cov: 12084 ft: 14114 corp: 7/174b lim: 35 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 CopyPart- 00:08:18.538 [2024-06-07 22:59:10.693109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.693145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.693286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.693307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.693442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.693465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.538 #26 NEW cov: 12084 ft: 14199 corp: 8/206b lim: 35 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 ChangeByte- 00:08:18.538 [2024-06-07 22:59:10.753247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.753282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.753423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.753445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.538 [2024-06-07 22:59:10.753581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.538 [2024-06-07 22:59:10.753602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.538 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:18.538 #27 NEW cov: 12107 ft: 14265 corp: 9/238b lim: 35 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 ShuffleBytes- 00:08:18.798 [2024-06-07 22:59:10.833174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.833211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:10.833345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.833368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.798 #28 NEW cov: 12107 ft: 14288 corp: 10/265b lim: 35 exec/s: 28 rss: 71Mb L: 27/33 MS: 1 ShuffleBytes- 00:08:18.798 [2024-06-07 22:59:10.893671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.893706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:10.893842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.893865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:10.893994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.894015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.798 #29 NEW cov: 12107 ft: 14323 corp: 11/297b lim: 35 exec/s: 29 rss: 71Mb L: 32/33 MS: 1 ShuffleBytes- 00:08:18.798 [2024-06-07 22:59:10.973938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.973973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:10.974109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.974130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:10.974269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:10.974291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.798 #30 NEW cov: 12107 ft: 14346 corp: 12/329b lim: 35 exec/s: 30 rss: 72Mb L: 32/33 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:08:18.798 [2024-06-07 22:59:11.053917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:11.053951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.798 [2024-06-07 22:59:11.054092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.798 [2024-06-07 22:59:11.054117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.057 #31 NEW cov: 12107 ft: 14361 corp: 13/355b lim: 35 exec/s: 31 rss: 72Mb L: 26/33 MS: 1 EraseBytes- 00:08:19.057 [2024-06-07 22:59:11.134179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.134215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.057 [2024-06-07 22:59:11.134352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.134373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.057 #32 NEW cov: 12107 ft: 14379 corp: 14/382b lim: 35 exec/s: 32 rss: 72Mb L: 27/33 MS: 1 CopyPart- 00:08:19.057 [2024-06-07 22:59:11.194328] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.194362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.057 [2024-06-07 22:59:11.194498] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.194519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.057 #33 NEW cov: 12107 ft: 14384 corp: 15/409b lim: 35 exec/s: 33 rss: 72Mb L: 27/33 MS: 1 ChangeByte- 00:08:19.057 [2024-06-07 22:59:11.274858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.274893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.057 [2024-06-07 22:59:11.275032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.057 [2024-06-07 22:59:11.275053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.058 [2024-06-07 22:59:11.275195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.058 [2024-06-07 22:59:11.275214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.058 #34 NEW cov: 12107 ft: 14393 corp: 16/441b lim: 35 exec/s: 34 rss: 72Mb L: 32/33 MS: 1 ShuffleBytes- 00:08:19.317 [2024-06-07 22:59:11.334827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.334862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.334995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.335018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.317 #35 NEW cov: 12107 ft: 14399 corp: 17/468b lim: 35 exec/s: 35 rss: 72Mb L: 27/33 MS: 1 InsertByte- 00:08:19.317 [2024-06-07 22:59:11.415061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NON OPERATIONAL POWER STATE CONFIG cid:5 cdw10:00000011 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.415098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.415244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.415265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.317 #36 NEW cov: 12107 ft: 14416 corp: 18/495b lim: 35 exec/s: 36 rss: 72Mb L: 27/33 MS: 1 CMP- DE: "\000.Yv\021\000\000\000"- 00:08:19.317 [2024-06-07 22:59:11.475454] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.475489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.475632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.475655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.475797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.475819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.317 #37 NEW cov: 12107 ft: 14424 corp: 19/527b lim: 35 exec/s: 37 rss: 72Mb L: 32/33 MS: 1 ChangeBit- 00:08:19.317 [2024-06-07 22:59:11.535614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.535648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.535790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.535811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.317 [2024-06-07 22:59:11.535951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.317 [2024-06-07 22:59:11.535973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.317 #38 NEW cov: 12107 ft: 14431 corp: 20/559b lim: 35 exec/s: 38 rss: 72Mb L: 32/33 MS: 1 ShuffleBytes- 00:08:19.577 [2024-06-07 22:59:11.596078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.596115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.596254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.596278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.596415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.596437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.596565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.596589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.577 #39 NEW cov: 12107 ft: 14678 corp: 21/594b lim: 35 exec/s: 39 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:19.577 [2024-06-07 22:59:11.656056] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.656096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.656236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.656257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.656401] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.656423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.577 #40 NEW cov: 12107 ft: 14686 corp: 22/626b lim: 35 exec/s: 40 rss: 72Mb L: 32/35 MS: 1 ChangeByte- 00:08:19.577 [2024-06-07 22:59:11.715943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.715978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.716117] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.716138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.577 #41 NEW cov: 12107 ft: 14704 corp: 23/652b lim: 35 exec/s: 41 rss: 72Mb L: 26/35 MS: 1 ChangeBit- 00:08:19.577 [2024-06-07 22:59:11.776393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.776428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.776560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES LBA RANGE TYPE cid:6 cdw10:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.776585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.577 [2024-06-07 22:59:11.776718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.776739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.577 NEW_FUNC[1/1]: 0x4b3680 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289 00:08:19.577 #42 NEW cov: 12118 ft: 14747 corp: 24/684b lim: 35 exec/s: 42 rss: 72Mb L: 32/35 MS: 1 CMP- DE: "\001\000\000\003"- 00:08:19.577 [2024-06-07 22:59:11.836539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.577 [2024-06-07 22:59:11.836572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:19.577 [2024-06-07 22:59:11.836727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:19.577 [2024-06-07 22:59:11.836750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:19.577 [2024-06-07 22:59:11.836895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:19.577 [2024-06-07 22:59:11.836918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:19.837 #43 NEW cov: 12118 ft: 14751 corp: 25/716b lim: 35 exec/s: 21 rss: 72Mb L: 32/35 MS: 1 ChangeBinInt-
00:08:19.837 #43 DONE cov: 12118 ft: 14751 corp: 25/716b lim: 35 exec/s: 21 rss: 72Mb
00:08:19.837 ###### Recommended dictionary. ######
00:08:19.837 "\001\000\000\001" # Uses: 1
00:08:19.837 "\000.Yv\021\000\000\000" # Uses: 0
00:08:19.837 "\001\000\000\003" # Uses: 0
00:08:19.837 ###### End of recommended dictionary. ######
00:08:19.837 Done 43 runs in 2 second(s)
00:08:19.837 22:59:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:19.837 22:59:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
[2024-06-07 22:59:12.045750] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
[2024-06-07 22:59:12.045827] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156555 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-07 22:59:12.283704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-07 22:59:12.362136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
[2024-06-07 22:59:12.424451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-06-07 22:59:12.440782] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 2643682314
INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 63Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
This may also happen if the target rejected all inputs we tried so far
[2024-06-07 22:59:12.489146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-06-07 22:59:12.489186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:20.927 NEW_FUNC[1/686]: 0x499940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:08:20.927 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:20.927 #8 NEW cov: 11908 ft: 11909 corp: 2/30b lim: 105 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes-
00:08:20.927 [2024-06-07 22:59:12.940315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:20.927 [2024-06-07 22:59:12.940360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:20.927 #9 NEW cov: 12045 ft: 12555 corp: 3/59b lim: 105 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 ShuffleBytes-
00:08:20.927 [2024-06-07 22:59:13.010732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:20.927 [2024-06-07 22:59:13.010773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:20.927 [2024-06-07 22:59:13.010814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.927 [2024-06-07 22:59:13.010835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.927 [2024-06-07 22:59:13.010902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.927 [2024-06-07 22:59:13.010924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.927 #10 NEW cov: 12051 ft: 13175 corp: 4/133b lim: 105 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:08:20.927 [2024-06-07 22:59:13.060795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.927 [2024-06-07 22:59:13.060831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.927 [2024-06-07 22:59:13.060875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.927 [2024-06-07 22:59:13.060896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.927 [2024-06-07 22:59:13.060962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.927 [2024-06-07 22:59:13.060983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.928 #11 NEW cov: 12136 ft: 13471 corp: 5/207b lim: 105 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 ChangeBit- 00:08:20.928 [2024-06-07 22:59:13.130696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.928 [2024-06-07 22:59:13.130732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.928 #12 NEW cov: 12136 ft: 13561 corp: 6/236b lim: 105 exec/s: 0 rss: 70Mb L: 29/74 MS: 1 ShuffleBytes- 00:08:20.928 [2024-06-07 22:59:13.181117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.928 [2024-06-07 22:59:13.181153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.928 [2024-06-07 22:59:13.181198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.928 [2024-06-07 22:59:13.181223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.928 [2024-06-07 22:59:13.181289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.928 [2024-06-07 22:59:13.181309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.188 #13 NEW cov: 12136 ft: 13635 corp: 7/310b lim: 105 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 ShuffleBytes- 00:08:21.188 [2024-06-07 22:59:13.251049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.188 [2024-06-07 22:59:13.251086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.188 #19 NEW cov: 12136 ft: 13746 corp: 8/339b lim: 105 exec/s: 0 rss: 70Mb L: 29/74 MS: 1 ChangeByte- 00:08:21.188 [2024-06-07 22:59:13.321415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4340410373555355452 len:15421 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.188 [2024-06-07 22:59:13.321451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.188 [2024-06-07 22:59:13.321496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4340410370284600380 len:15421 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.188 [2024-06-07 22:59:13.321517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.189 #24 NEW cov: 12136 ft: 14086 corp: 9/394b lim: 105 exec/s: 0 rss: 70Mb L: 55/74 MS: 5 InsertByte-CrossOver-InsertByte-CopyPart-InsertRepeatedBytes- 00:08:21.189 [2024-06-07 22:59:13.371715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.371760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.189 [2024-06-07 22:59:13.371794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.371814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.189 [2024-06-07 22:59:13.371879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.371900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.189 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:21.189 #25 NEW cov: 12159 ft: 14172 corp: 10/468b lim: 105 exec/s: 0 rss: 71Mb L: 74/74 MS: 1 ShuffleBytes- 00:08:21.189 [2024-06-07 22:59:13.441944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.441980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.189 [2024-06-07 22:59:13.442032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.442052] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.189 [2024-06-07 22:59:13.442117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.189 [2024-06-07 22:59:13.442142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.447 #26 NEW cov: 12159 ft: 14211 corp: 11/543b lim: 105 exec/s: 0 rss: 71Mb L: 75/75 MS: 1 InsertByte- 00:08:21.447 [2024-06-07 22:59:13.491756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.491793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.447 #27 NEW cov: 12159 ft: 14322 corp: 12/572b lim: 105 exec/s: 27 rss: 71Mb L: 29/75 MS: 1 CMP- DE: "\377\015>[\237J\245\272"- 00:08:21.447 [2024-06-07 22:59:13.541921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.541957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.447 #33 NEW cov: 12159 ft: 14334 corp: 13/601b lim: 105 exec/s: 33 rss: 71Mb L: 29/75 MS: 1 ChangeBit- 00:08:21.447 [2024-06-07 22:59:13.582345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.582381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.447 [2024-06-07 22:59:13.582416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.582437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.447 [2024-06-07 22:59:13.582502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.582523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.447 #34 NEW cov: 12159 ft: 14359 corp: 14/675b lim: 105 exec/s: 34 rss: 71Mb L: 74/75 MS: 1 ShuffleBytes- 00:08:21.447 [2024-06-07 22:59:13.632452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.632488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.447 [2024-06-07 22:59:13.632550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.632570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.447 [2024-06-07 22:59:13.632644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.632666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.447 #35 NEW cov: 12159 ft: 14374 corp: 15/752b lim: 105 exec/s: 35 rss: 71Mb L: 77/77 MS: 1 CMP- DE: "\001\001"- 00:08:21.447 [2024-06-07 22:59:13.702355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744070388383743 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.447 [2024-06-07 22:59:13.702390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 #38 NEW cov: 12159 ft: 14468 corp: 16/792b lim: 105 exec/s: 38 rss: 71Mb L: 40/77 MS: 3 ShuffleBytes-InsertByte-CrossOver- 00:08:21.709 [2024-06-07 22:59:13.742456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.742493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 #39 NEW cov: 12159 ft: 14506 corp: 17/817b lim: 105 exec/s: 39 rss: 71Mb L: 25/77 MS: 1 EraseBytes- 00:08:21.709 [2024-06-07 22:59:13.792923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073706930175 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.792959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.793018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.793038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.793105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.793128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.709 #40 NEW cov: 12159 ft: 14514 corp: 18/893b lim: 105 exec/s: 40 rss: 71Mb L: 76/77 MS: 1 InsertByte- 00:08:21.709 [2024-06-07 22:59:13.842912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.842947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.842981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.843001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.709 #41 NEW cov: 
12159 ft: 14545 corp: 19/936b lim: 105 exec/s: 41 rss: 71Mb L: 43/77 MS: 1 EraseBytes- 00:08:21.709 [2024-06-07 22:59:13.893158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.893193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.893239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.893256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.893323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.893345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.709 #42 NEW cov: 12159 ft: 14568 corp: 20/1015b lim: 105 exec/s: 42 rss: 71Mb L: 79/79 MS: 1 CrossOver- 00:08:21.709 [2024-06-07 22:59:13.963553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.963595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.963664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:651061555542690057 len:2314 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.963685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.963757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:651061555542690057 len:2314 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.963779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.709 [2024-06-07 22:59:13.963848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:651061555542690057 len:2415 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.709 [2024-06-07 22:59:13.963869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.969 #43 NEW cov: 12159 ft: 15072 corp: 21/1105b lim: 105 exec/s: 43 rss: 71Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:08:21.969 [2024-06-07 22:59:14.033320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7957419012188434030 len:28271 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.033356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.969 #44 NEW cov: 12159 ft: 15135 corp: 22/1134b lim: 105 exec/s: 44 rss: 71Mb L: 29/90 MS: 1 ChangeByte- 00:08:21.969 [2024-06-07 22:59:14.093776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.093811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.969 [2024-06-07 22:59:14.093863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.093883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.969 [2024-06-07 22:59:14.093949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.093970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.969 #45 NEW cov: 12159 ft: 15148 corp: 23/1208b lim: 105 exec/s: 45 rss: 71Mb L: 74/90 MS: 1 ChangeBinInt- 00:08:21.969 [2024-06-07 22:59:14.143879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.143915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.969 [2024-06-07 22:59:14.143967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744072870690815 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.143987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.969 [2024-06-07 22:59:14.144051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.144072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.969 #46 NEW cov: 12159 ft: 15180 corp: 24/1282b lim: 105 exec/s: 46 rss: 71Mb L: 74/90 MS: 1 ChangeByte- 00:08:21.969 [2024-06-07 22:59:14.213955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1808504320951916825 len:6426 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.213992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.969 [2024-06-07 22:59:14.214029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1808504320951916825 len:6426 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.969 [2024-06-07 22:59:14.214055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.969 #51 NEW cov: 12159 ft: 15203 corp: 25/1327b lim: 105 exec/s: 51 rss: 72Mb L: 45/90 MS: 5 ShuffleBytes-CopyPart-CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:22.229 [2024-06-07 22:59:14.253897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:7998392935767764590 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.253933] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.229 #52 NEW cov: 12159 ft: 15211 corp: 26/1356b lim: 105 exec/s: 52 rss: 72Mb L: 29/90 MS: 1 CrossOver- 00:08:22.229 [2024-06-07 22:59:14.294434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8463800224370226549 len:30070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.294470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.229 [2024-06-07 22:59:14.294532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8463800222054970741 len:30070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.294552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.229 [2024-06-07 22:59:14.294619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8463800222054970741 len:30208 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.294640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.229 [2024-06-07 22:59:14.294709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.294730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.229 #53 NEW cov: 12159 ft: 15231 corp: 27/1451b lim: 105 exec/s: 53 rss: 72Mb L: 95/95 MS: 1 InsertRepeatedBytes- 00:08:22.229 [2024-06-07 22:59:14.364555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.364598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.229 [2024-06-07 22:59:14.364653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.229 [2024-06-07 22:59:14.364674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.230 [2024-06-07 22:59:14.364740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744071260798975 len:512 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.230 [2024-06-07 22:59:14.364762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.230 #59 NEW cov: 12159 ft: 15242 corp: 28/1515b lim: 105 exec/s: 59 rss: 72Mb L: 64/95 MS: 1 EraseBytes- 00:08:22.230 [2024-06-07 22:59:14.434455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744070388383743 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.230 [2024-06-07 22:59:14.434492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.230 #60 NEW cov: 12159 ft: 15287 corp: 29/1555b lim: 105 exec/s: 30 rss: 72Mb L: 40/95 MS: 1 ChangeBinInt- 
00:08:22.230 #60 DONE cov: 12159 ft: 15287 corp: 29/1555b lim: 105 exec/s: 30 rss: 72Mb
00:08:22.230 ###### Recommended dictionary. ######
00:08:22.230 "\377\015>[\237J\245\272" # Uses: 0
00:08:22.230 "\001\001" # Uses: 2
00:08:22.230 ###### End of recommended dictionary. ######
00:08:22.230 Done 60 runs in 2 second(s)
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:22.489 22:59:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
[2024-06-07 22:59:14.666075] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:08:22.489 [2024-06-07 22:59:14.666159] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157064 ]
EAL: No free 2048 kB hugepages reported on node 1
00:08:22.749 [2024-06-07 22:59:14.899010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:22.749 [2024-06-07 22:59:14.977736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.008 [2024-06-07 22:59:15.040166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:23.008 [2024-06-07 22:59:15.056555] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:08:23.008 INFO: Running with entropic power schedule (0xFF, 100).
00:08:23.008 INFO: Seed: 965722826
00:08:23.008 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
00:08:23.008 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
00:08:23.008 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:08:23.008 INFO: A corpus is not provided, starting from an empty corpus
00:08:23.008 #2 INITED exec/s: 0 rss: 63Mb
00:08:23.008 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:23.008 This may also happen if the target rejected all inputs we tried so far
00:08:23.008 [2024-06-07 22:59:15.114526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.008 [2024-06-07 22:59:15.114566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.008 [2024-06-07 22:59:15.114609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.008 [2024-06-07 22:59:15.114630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.008 [2024-06-07 22:59:15.114696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.008 [2024-06-07 22:59:15.114717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.578 NEW_FUNC[1/687]: 0x49ccc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540
00:08:23.578 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:23.578 #9 NEW cov: 11936 ft: 11937 corp: 2/88b lim: 120 exec/s: 0 rss: 70Mb L: 87/87 MS: 2 CopyPart-InsertRepeatedBytes-
00:08:23.578 [2024-06-07 22:59:15.565694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.565738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.565772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.565792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.565853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017657418 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.565874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.565935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.565956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:23.578 #10 NEW cov: 12066 ft: 13019 corp: 3/198b lim: 120 exec/s: 0 rss: 70Mb L: 110/110 MS: 1 CrossOver-
00:08:23.578 [2024-06-07 22:59:15.635619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.635656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.635694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.635714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.635775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.635797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.578 #11 NEW cov: 12072 ft: 13237 corp: 4/285b lim: 120 exec/s: 0 rss: 70Mb L: 87/110 MS: 1 ChangeBit-
00:08:23.578 [2024-06-07 22:59:15.685757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.685794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.685833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.578 [2024-06-07 22:59:15.685854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.578 [2024-06-07 22:59:15.685915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.685936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.579 #12 NEW cov: 12157 ft: 13486 corp: 5/372b lim: 120 exec/s: 0 rss: 70Mb L: 87/110 MS: 1 ChangeBinInt-
00:08:23.579 [2024-06-07 22:59:15.755793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.755828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.579 [2024-06-07 22:59:15.755861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.755881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.579 #13 NEW cov: 12157 ft: 13896 corp: 6/439b lim: 120 exec/s: 0 rss: 70Mb L: 67/110 MS: 1 EraseBytes-
00:08:23.579 [2024-06-07 22:59:15.806111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.806147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.579 [2024-06-07 22:59:15.806196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.806216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.579 [2024-06-07 22:59:15.806277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.579 [2024-06-07 22:59:15.806298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.838 #14 NEW cov: 12157 ft: 14066 corp: 7/526b lim: 120 exec/s: 0 rss: 71Mb L: 87/110 MS: 1 CrossOver-
00:08:23.838 [2024-06-07 22:59:15.876293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.876330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.876365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.876385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.876447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.876468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.839 #15 NEW cov: 12157 ft: 14147 corp: 8/613b lim: 120 exec/s: 0 rss: 71Mb L: 87/110 MS: 1 ShuffleBytes-
00:08:23.839 [2024-06-07 22:59:15.926456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.926495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.926527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.926546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.926616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.926637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.839 #16 NEW cov: 12157 ft: 14175 corp: 9/700b lim: 120 exec/s: 0 rss: 71Mb L: 87/110 MS: 1 CrossOver-
00:08:23.839 [2024-06-07 22:59:15.976618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.976653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.976695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017739338 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.976715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:15.976773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:15.976794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.839 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601
00:08:23.839 #17 NEW cov: 12180 ft: 14241 corp: 10/787b lim: 120 exec/s: 0 rss: 71Mb L: 87/110 MS: 1 ChangeBit-
00:08:23.839 [2024-06-07 22:59:16.026703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:16.026739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:16.026784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:16.026804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:16.026864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:16.026884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:23.839 #18 NEW cov: 12180 ft: 14285 corp: 11/874b lim: 120 exec/s: 0 rss: 71Mb L: 87/110 MS: 1 ChangeBit-
00:08:23.839 [2024-06-07 22:59:16.066678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:16.066713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:23.839 [2024-06-07 22:59:16.066754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5335158391508191818 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:23.839 [2024-06-07 22:59:16.066774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.149 #19 NEW cov: 12180 ft: 14361 corp: 12/925b lim: 120 exec/s: 19 rss: 71Mb L: 51/110 MS: 1 CrossOver-
00:08:24.149 [2024-06-07 22:59:16.137049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.137085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.149 [2024-06-07 22:59:16.137132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.137152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.149 [2024-06-07 22:59:16.137213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.137234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.149 #20 NEW cov: 12180 ft: 14388 corp: 13/999b lim: 120 exec/s: 20 rss: 71Mb L: 74/110 MS: 1 CrossOver-
00:08:24.149 [2024-06-07 22:59:16.187198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.187234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.149 [2024-06-07 22:59:16.187279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.187299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.149 [2024-06-07 22:59:16.187358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.187379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.149 #21 NEW cov: 12180 ft: 14412 corp: 14/1086b lim: 120 exec/s: 21 rss: 71Mb L: 87/110 MS: 1 ChangeByte-
00:08:24.149 [2024-06-07 22:59:16.237313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.237349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.149 [2024-06-07 22:59:16.237391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172788771310154 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.149 [2024-06-07 22:59:16.237412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.237474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.237494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.150 #22 NEW cov: 12180 ft: 14425 corp: 15/1177b lim: 120 exec/s: 22 rss: 71Mb L: 91/110 MS: 1 CMP- DE: "F\000\000\000"-
00:08:24.150 [2024-06-07 22:59:16.277434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.277469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.277510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017739338 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.277533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.277600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.277620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.150 #23 NEW cov: 12180 ft: 14427 corp: 16/1264b lim: 120 exec/s: 23 rss: 71Mb L: 87/110 MS: 1 ShuffleBytes-
00:08:24.150 [2024-06-07 22:59:16.347653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.347688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.347738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.347758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.347818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.347839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.150 #24 NEW cov: 12180 ft: 14452 corp: 17/1351b lim: 120 exec/s: 24 rss: 71Mb L: 87/110 MS: 1 ChangeBinInt-
00:08:24.150 [2024-06-07 22:59:16.387951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.387987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.388053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.388074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.388134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.388154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.150 [2024-06-07 22:59:16.388215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.150 [2024-06-07 22:59:16.388235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:24.440 #25 NEW cov: 12180 ft: 14486 corp: 18/1459b lim: 120 exec/s: 25 rss: 71Mb L: 108/110 MS: 1 InsertRepeatedBytes-
00:08:24.440 [2024-06-07 22:59:16.457943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.457982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.458026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.458046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.458107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:2379 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.458132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.440 #26 NEW cov: 12180 ft: 14503 corp: 19/1546b lim: 120 exec/s: 26 rss: 71Mb L: 87/110 MS: 1 ChangeByte-
00:08:24.440 [2024-06-07 22:59:16.507916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.507952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.507992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5335158391508191818 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.508012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.440 #27 NEW cov: 12180 ft: 14590 corp: 20/1597b lim: 120 exec/s: 27 rss: 71Mb L: 51/110 MS: 1 ChangeBit-
00:08:24.440 [2024-06-07 22:59:16.578508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.578543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.578602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6221254864074593878 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.578623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.578684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.578704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.578765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.578786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:24.440 #28 NEW cov: 12180 ft: 14618 corp: 21/1703b lim: 120 exec/s: 28 rss: 71Mb L: 106/110 MS: 1 InsertRepeatedBytes-
00:08:24.440 [2024-06-07 22:59:16.628434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.628469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.628509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.628529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.628597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.628618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.440 #29 NEW cov: 12180 ft: 14632 corp: 22/1790b lim: 120 exec/s: 29 rss: 72Mb L: 87/110 MS: 1 CrossOver-
00:08:24.440 [2024-06-07 22:59:16.698499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.698534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.440 [2024-06-07 22:59:16.698567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.440 [2024-06-07 22:59:16.698598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.699 #30 NEW cov: 12180 ft: 14700 corp: 23/1838b lim: 120 exec/s: 30 rss: 72Mb L: 48/110 MS: 1 EraseBytes-
00:08:24.699 [2024-06-07 22:59:16.748778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.748815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.748862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172788771310154 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.748881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.748941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.748962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.699 #31 NEW cov: 12180 ft: 14711 corp: 24/1929b lim: 120 exec/s: 31 rss: 72Mb L: 91/110 MS: 1 ChangeByte-
00:08:24.699 [2024-06-07 22:59:16.819150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.819186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.819237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353173472917473866 len:59882 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.819256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.819314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:16855259588372064745 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.819333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.819395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.819415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:24.699 #32 NEW cov: 12180 ft: 14735 corp: 25/2040b lim: 120 exec/s: 32 rss: 72Mb L: 111/111 MS: 1 InsertRepeatedBytes-
00:08:24.699 [2024-06-07 22:59:16.889008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.889044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.889080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.889100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.699 #33 NEW cov: 12180 ft: 14742 corp: 26/2105b lim: 120 exec/s: 33 rss: 72Mb L: 65/111 MS: 1 EraseBytes-
00:08:24.699 [2024-06-07 22:59:16.939471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19157 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.939506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.939557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15336116641672254676 len:54485 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.939585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.939643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.699 [2024-06-07 22:59:16.939664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.699 [2024-06-07 22:59:16.939722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.700 [2024-06-07 22:59:16.939742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:24.700 #34 NEW cov: 12180 ft: 14775 corp: 27/2222b lim: 120 exec/s: 34 rss: 72Mb L: 117/117 MS: 1 InsertRepeatedBytes-
00:08:24.958 [2024-06-07 22:59:16.989448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:16.989484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.958 [2024-06-07 22:59:16.989526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:16.989543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.958 [2024-06-07 22:59:16.989609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19275 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:16.989630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.958 #35 NEW cov: 12180 ft: 14778 corp: 28/2309b lim: 120 exec/s: 35 rss: 72Mb L: 87/117 MS: 1 ChangeBit-
00:08:24.958 [2024-06-07 22:59:17.059591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788943931978 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:17.059626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.958 [2024-06-07 22:59:17.059681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:17.059702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:24.958 [2024-06-07 22:59:17.059764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5353172790017673802 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.958 [2024-06-07 22:59:17.059784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:24.959 #36 NEW cov: 12180 ft: 14794 corp: 29/2396b lim: 120 exec/s: 36 rss: 72Mb L: 87/117 MS: 1 ShuffleBytes-
00:08:24.959 [2024-06-07 22:59:17.099435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5353172788939737674 len:19019 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:24.959 [2024-06-07 22:59:17.099471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:24.959 #37 NEW cov: 12180 ft: 15676 corp: 30/2430b lim: 120 exec/s: 18 rss: 72Mb L: 34/117 MS: 1 CrossOver-
00:08:24.959 #37 DONE cov: 12180 ft: 15676 corp: 30/2430b lim: 120 exec/s: 18 rss: 72Mb
00:08:24.959 ###### Recommended dictionary. ######
00:08:24.959 "F\000\000\000" # Uses: 0
00:08:24.959 ###### End of recommended dictionary. ######
00:08:24.959 Done 37 runs in 2 second(s)
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418'
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:25.218 22:59:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18
00:08:25.477 [2024-06-07 22:59:17.321365] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:08:25.477 [2024-06-07 22:59:17.321439] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157387 ]
00:08:25.478 EAL: No free 2048 kB hugepages reported on node 1
00:08:25.478 [2024-06-07 22:59:17.558939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.478 [2024-06-07 22:59:17.641287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.478 [2024-06-07 22:59:17.703675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:25.478 [2024-06-07 22:59:17.720058] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 ***
00:08:25.478 INFO: Running with entropic power schedule (0xFF, 100).
00:08:25.478 INFO: Seed: 3628721626
00:08:25.736 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
00:08:25.736 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
00:08:25.737 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18
00:08:25.737 INFO: A corpus is not provided, starting from an empty corpus
00:08:25.737 #2 INITED exec/s: 0 rss: 63Mb
00:08:25.737 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:25.737 This may also happen if the target rejected all inputs we tried so far
00:08:25.737 [2024-06-07 22:59:17.790418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:25.737 [2024-06-07 22:59:17.790461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:25.737 [2024-06-07 22:59:17.790501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:25.737 [2024-06-07 22:59:17.790524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:25.737 [2024-06-07 22:59:17.790646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:25.737 [2024-06-07 22:59:17.790667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:25.995 NEW_FUNC[1/685]: 0x4a05b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562
00:08:25.995 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:25.995 #15 NEW cov: 11874 ft: 11875 corp: 2/68b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes-
00:08:25.995 [2024-06-07 22:59:18.241344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:25.995 [2024-06-07 22:59:18.241384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:25.995 [2024-06-07 22:59:18.241413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:25.995 [2024-06-07 22:59:18.241431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:25.995 [2024-06-07 22:59:18.241544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:25.995 [2024-06-07 22:59:18.241566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.254 #16 NEW cov: 12009 ft: 12395 corp: 3/135b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 ChangeByte-
00:08:26.254 [2024-06-07 22:59:18.291291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.254 [2024-06-07 22:59:18.291321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.254 [2024-06-07 22:59:18.291399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.254 [2024-06-07 22:59:18.291424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.254 [2024-06-07 22:59:18.291542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.254 [2024-06-07 22:59:18.291564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.254 #17 NEW cov: 12015 ft: 12673 corp: 4/202b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 ChangeBinInt-
00:08:26.254 [2024-06-07 22:59:18.331442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.254 [2024-06-07 22:59:18.331470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.254 [2024-06-07 22:59:18.331545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.254 [2024-06-07 22:59:18.331569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.254 [2024-06-07 22:59:18.331695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.254 [2024-06-07 22:59:18.331717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.255 #23 NEW cov: 12100 ft: 12963 corp: 5/269b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 ChangeByte-
00:08:26.255 [2024-06-07 22:59:18.381713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.255 [2024-06-07 22:59:18.381745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.381820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.255 [2024-06-07 22:59:18.381846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.381953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.255 [2024-06-07 22:59:18.381973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.255 #24 NEW cov: 12100 ft: 13063 corp: 6/337b lim: 100 exec/s: 0 rss: 70Mb L: 68/68 MS: 1 InsertByte-
00:08:26.255 [2024-06-07 22:59:18.431828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.255 [2024-06-07 22:59:18.431856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.431936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.255 [2024-06-07 22:59:18.431960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.432069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.255 [2024-06-07 22:59:18.432091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.255 #30 NEW cov: 12100 ft: 13190 corp: 7/404b lim: 100 exec/s: 0 rss: 70Mb L: 67/68 MS: 1 ChangeByte-
00:08:26.255 [2024-06-07 22:59:18.472018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.255 [2024-06-07 22:59:18.472047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.472142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.255 [2024-06-07 22:59:18.472162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.472275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.255 [2024-06-07 22:59:18.472298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.255 #31 NEW cov: 12100 ft: 13265 corp: 8/471b lim: 100 exec/s: 0 rss: 70Mb L: 67/68 MS: 1 CopyPart-
00:08:26.255 [2024-06-07 22:59:18.512086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.255 [2024-06-07 22:59:18.512116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.512191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.255 [2024-06-07 22:59:18.512215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.255 [2024-06-07 22:59:18.512322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.255 [2024-06-07 22:59:18.512346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.513 #32 NEW cov: 12100 ft: 13295 corp: 9/538b lim: 100 exec/s: 0 rss: 71Mb L: 67/68 MS: 1 ChangeBinInt-
00:08:26.513 [2024-06-07 22:59:18.561996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.513 [2024-06-07 22:59:18.562025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.562117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.513 [2024-06-07 22:59:18.562138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.562247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.513 [2024-06-07 22:59:18.562269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.513 #33 NEW cov: 12100 ft: 13369 corp: 10/606b lim: 100 exec/s: 0 rss: 71Mb L: 68/68 MS: 1 ShuffleBytes-
00:08:26.513 [2024-06-07 22:59:18.612390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.513 [2024-06-07 22:59:18.612421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.612503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.513 [2024-06-07 22:59:18.612513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.612614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.513 [2024-06-07 22:59:18.612630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.513 #34 NEW cov: 12100 ft: 13390 corp: 11/673b lim: 100 exec/s: 0 rss: 71Mb L: 67/68 MS: 1 ChangeBit-
00:08:26.513 [2024-06-07 22:59:18.652494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.513 [2024-06-07 22:59:18.652526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.652628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.513 [2024-06-07 22:59:18.652646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.652749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.513 [2024-06-07 22:59:18.652766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.513 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601
00:08:26.513 #35 NEW cov: 12123 ft: 13466 corp: 12/740b lim: 100 exec/s: 0 rss: 71Mb L: 67/68 MS: 1 ChangeByte-
00:08:26.513 [2024-06-07 22:59:18.692413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.513 [2024-06-07 22:59:18.692442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.692540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.513 [2024-06-07 22:59:18.692554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.692668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.513 [2024-06-07 22:59:18.692687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.513 #36 NEW cov: 12123 ft: 13540 corp: 13/807b lim: 100 exec/s: 0 rss: 71Mb L: 67/68 MS: 1 ChangeByte-
00:08:26.513 [2024-06-07 22:59:18.742990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.513 [2024-06-07 22:59:18.743020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.743123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.513 [2024-06-07 22:59:18.743133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.513 [2024-06-07 22:59:18.743243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.514 [2024-06-07 22:59:18.743264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.514 [2024-06-07 22:59:18.743375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:08:26.514 [2024-06-07 22:59:18.743398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:26.514 #37 NEW cov: 12123 ft: 13808 corp: 14/899b lim: 100 exec/s: 37 rss: 71Mb L: 92/92 MS: 1 InsertRepeatedBytes-
00:08:26.772 [2024-06-07 22:59:18.792872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.772 [2024-06-07 22:59:18.792900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.772 [2024-06-07 22:59:18.792987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.772 [2024-06-07 22:59:18.792996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.772 [2024-06-07 22:59:18.793127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.772 [2024-06-07 22:59:18.793150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.772 #38 NEW cov: 12123 ft: 13845 corp: 15/967b lim: 100 exec/s: 38 rss: 71Mb L: 68/92 MS: 1 CopyPart-
00:08:26.772 [2024-06-07 22:59:18.843306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.772 [2024-06-07 22:59:18.843338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.772 [2024-06-07 22:59:18.843443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.772 [2024-06-07 22:59:18.843456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.772 [2024-06-07 22:59:18.843568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.772 [2024-06-07 22:59:18.843594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.772 [2024-06-07 22:59:18.843709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:08:26.772 [2024-06-07 22:59:18.843730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:26.773 #39 NEW cov: 12123 ft: 13852 corp: 16/1059b lim: 100 exec/s: 39 rss: 72Mb L: 92/92 MS: 1 ChangeBit-
00:08:26.773 [2024-06-07 22:59:18.892708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.773 [2024-06-07 22:59:18.892740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:18.892821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.773 [2024-06-07 22:59:18.892843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:18.892954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.773 [2024-06-07 22:59:18.892975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.773 #40 NEW cov: 12123 ft: 13893 corp: 17/1129b lim: 100 exec/s: 40 rss: 72Mb L: 70/92 MS: 1 CrossOver-
00:08:26.773 [2024-06-07 22:59:18.943335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.773 [2024-06-07 22:59:18.943365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:18.943444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.773 [2024-06-07 22:59:18.943465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:18.943581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.773 [2024-06-07 22:59:18.943600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.773 #41 NEW cov: 12123 ft: 13914 corp: 18/1196b lim: 100 exec/s: 41 rss: 72Mb L: 67/92 MS: 1 CMP- DE: "\377\000"-
00:08:26.773 [2024-06-07 22:59:19.003740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.773 [2024-06-07 22:59:19.003771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:19.003875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.773 [2024-06-07 22:59:19.003885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:19.003991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.773 [2024-06-07 22:59:19.004010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:19.004122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:08:26.773 [2024-06-07 22:59:19.004144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:26.773 #42 NEW cov: 12123 ft: 14063 corp: 19/1286b lim: 100 exec/s: 42 rss: 72Mb L: 90/92 MS: 1 InsertRepeatedBytes-
00:08:26.773 [2024-06-07 22:59:19.043131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:26.773 [2024-06-07 22:59:19.043160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:19.043257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:26.773 [2024-06-07 22:59:19.043267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:26.773 [2024-06-07 22:59:19.043385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:26.773 [2024-06-07 22:59:19.043409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 #43 NEW cov: 12123 ft: 14084 corp: 20/1359b lim: 100 exec/s: 43 rss: 72Mb L: 73/92 MS: 1 InsertRepeatedBytes-
00:08:27.032 [2024-06-07 22:59:19.083854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.032 [2024-06-07 22:59:19.083885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.083968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.032 [2024-06-07 22:59:19.083985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.084102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.032 [2024-06-07 22:59:19.084125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.084242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:08:27.032 [2024-06-07 22:59:19.084265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:27.032 #44 NEW cov: 12123 ft: 14100 corp: 21/1451b lim: 100 exec/s: 44 rss: 72Mb L: 92/92 MS: 1 CopyPart-
00:08:27.032 [2024-06-07 22:59:19.133812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.032 [2024-06-07 22:59:19.133842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.133935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.032 [2024-06-07 22:59:19.133945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.134052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.032 [2024-06-07 22:59:19.134074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 #45 NEW cov: 12123 ft: 14127 corp: 22/1520b lim: 100 exec/s: 45 rss: 72Mb L: 69/92 MS: 1 CrossOver-
00:08:27.032 [2024-06-07 22:59:19.183993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.032 [2024-06-07 22:59:19.184023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.184098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.032 [2024-06-07 22:59:19.184119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.184229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.032 [2024-06-07 22:59:19.184247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 #46 NEW cov: 12123 ft: 14175 corp: 23/1588b lim: 100 exec/s: 46 rss: 72Mb L: 68/92 MS: 1 InsertByte-
00:08:27.032 [2024-06-07 22:59:19.234373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.032 [2024-06-07 22:59:19.234401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.234496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.032 [2024-06-07 22:59:19.234506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.234637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.032 [2024-06-07 22:59:19.234659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.234775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:08:27.032 [2024-06-07 22:59:19.234796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:27.032 #47 NEW cov: 12123 ft: 14198 corp: 24/1681b lim: 100 exec/s: 47 rss: 73Mb L: 93/93 MS: 1 InsertByte-
00:08:27.032 [2024-06-07 22:59:19.284244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.032 [2024-06-07 22:59:19.284273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.284370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.032 [2024-06-07 22:59:19.284379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.032 [2024-06-07 22:59:19.284495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.032 [2024-06-07 22:59:19.284521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.032 #48 NEW cov: 12123 ft: 14212 corp: 25/1749b lim: 100 exec/s: 48 rss: 73Mb L: 68/93 MS: 1 InsertByte-
00:08:27.291 [2024-06-07 22:59:19.324453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.291 [2024-06-07 22:59:19.324481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.324566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.291 [2024-06-07 22:59:19.324580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.324715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.291 [2024-06-07 22:59:19.324737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.291 #49 NEW cov: 12123 ft: 14219 corp: 26/1817b lim: 100 exec/s: 49 rss: 73Mb L: 68/93 MS: 1 ChangeBinInt-
00:08:27.291 [2024-06-07 22:59:19.364147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.291 [2024-06-07 22:59:19.364176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.291 #50 NEW cov: 12123 ft: 14654 corp: 27/1841b lim: 100 exec/s: 50 rss: 73Mb L: 24/93 MS: 1 CrossOver-
00:08:27.291 [2024-06-07 22:59:19.404654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.291 [2024-06-07 22:59:19.404682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.404791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.291 [2024-06-07 22:59:19.404801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.404911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.291 [2024-06-07 22:59:19.404931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.291 #51 NEW cov: 12123 ft: 14680 corp: 28/1916b lim: 100 exec/s: 51 rss: 73Mb L: 75/93 MS: 1 CMP- DE: "\000\016>_\016\223\253\012"-
00:08:27.291 [2024-06-07 22:59:19.444908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.291 [2024-06-07 22:59:19.444938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.445008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.291 [2024-06-07 22:59:19.445020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.445130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.291 [2024-06-07 22:59:19.445150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:27.291 #52 NEW cov: 12123 ft: 14684 corp: 29/1988b lim: 100 exec/s: 52 rss: 73Mb L: 72/93 MS: 1 CMP- DE: "\001\000\000\000"-
00:08:27.291 [2024-06-07 22:59:19.485172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0
00:08:27.291 [2024-06-07 22:59:19.485201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.485301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0
00:08:27.291 [2024-06-07 22:59:19.485311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:27.291 [2024-06-07 22:59:19.485427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0
00:08:27.291 [2024-06-07
22:59:19.485450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.291 [2024-06-07 22:59:19.485559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:27.291 [2024-06-07 22:59:19.485581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.291 #53 NEW cov: 12123 ft: 14711 corp: 30/2074b lim: 100 exec/s: 53 rss: 73Mb L: 86/93 MS: 1 CopyPart- 00:08:27.291 [2024-06-07 22:59:19.524667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.291 [2024-06-07 22:59:19.524696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.291 [2024-06-07 22:59:19.524787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.291 [2024-06-07 22:59:19.524797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.291 [2024-06-07 22:59:19.524914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.291 [2024-06-07 22:59:19.524929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.291 #54 NEW cov: 12123 ft: 14730 corp: 31/2142b lim: 100 exec/s: 54 rss: 73Mb L: 68/93 MS: 1 ShuffleBytes- 00:08:27.551 [2024-06-07 22:59:19.575124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.551 [2024-06-07 22:59:19.575155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.575226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.551 [2024-06-07 22:59:19.575241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.575370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.551 [2024-06-07 22:59:19.575392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.551 #55 NEW cov: 12123 ft: 14755 corp: 32/2209b lim: 100 exec/s: 55 rss: 73Mb L: 67/93 MS: 1 ChangeByte- 00:08:27.551 [2024-06-07 22:59:19.615320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.551 [2024-06-07 22:59:19.615347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.615429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.551 [2024-06-07 22:59:19.615438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.615550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.551 [2024-06-07 22:59:19.615570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.551 #56 NEW cov: 12123 ft: 14784 corp: 33/2282b lim: 100 exec/s: 56 rss: 73Mb L: 73/93 MS: 1 InsertRepeatedBytes- 00:08:27.551 [2024-06-07 22:59:19.655381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.551 [2024-06-07 22:59:19.655408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.655504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.551 [2024-06-07 22:59:19.655520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.655630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.551 [2024-06-07 22:59:19.655650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.551 #57 NEW cov: 12123 ft: 14792 corp: 34/2350b lim: 100 exec/s: 57 rss: 73Mb L: 68/93 MS: 1 InsertByte- 00:08:27.551 [2024-06-07 22:59:19.705110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.551 [2024-06-07 22:59:19.705139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.705219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.551 [2024-06-07 22:59:19.705230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.705343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.551 [2024-06-07 22:59:19.705360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.551 #58 NEW cov: 12123 ft: 14799 corp: 35/2421b lim: 100 exec/s: 58 rss: 73Mb L: 71/93 MS: 1 EraseBytes- 00:08:27.551 [2024-06-07 22:59:19.766023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:27.551 [2024-06-07 22:59:19.766054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.766156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:27.551 [2024-06-07 22:59:19.766166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.766279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:27.551 [2024-06-07 22:59:19.766300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.551 [2024-06-07 22:59:19.766417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:27.551 [2024-06-07 22:59:19.766440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1
00:08:27.551 #59 NEW cov: 12123 ft: 14818 corp: 36/2514b lim: 100 exec/s: 29 rss: 74Mb L: 93/93 MS: 1 CrossOver-
00:08:27.551 #59 DONE cov: 12123 ft: 14818 corp: 36/2514b lim: 100 exec/s: 29 rss: 74Mb
00:08:27.551 ###### Recommended dictionary. ######
00:08:27.551 "\377\000" # Uses: 0
00:08:27.551 "\000\016>_\016\223\253\012" # Uses: 0
00:08:27.551 "\001\000\000\000" # Uses: 0
00:08:27.551 ###### End of recommended dictionary. ######
00:08:27.551 Done 59 runs in 2 second(s)
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:27.810 22:59:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
00:08:27.810 [2024-06-07 22:59:19.988022] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
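Condensed, the nvmf/run.sh trace above amounts to the sequence below. This is a sketch reconstructed from the traced commands, not the script itself: $SPDK stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and the redirection of sed's output into the per-run config plus the export of LSAN_OPTIONS are inferred rather than shown verbatim in the trace.

fuzzer_type=19                         # chosen by start_llvm_fuzz 19 1 0x1
port=44$(printf %02d "$fuzzer_type")   # -> 4419; each fuzzer gets its own NVMe/TCP port
corpus_dir="$SPDK/../corpus/llvm_nvmf_${fuzzer_type}"
mkdir -p "$corpus_dir"
# Point this run's copy of the target config at its private port (redirect inferred).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"
# Leak reports from these two call sites are suppressed (run.sh@41-42).
{ echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
# Launch the harness against the rewritten transport ID, as traced at run.sh@45.
"$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"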
00:08:27.810 [2024-06-07 22:59:19.988115] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157918 ] 00:08:27.810 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.069 [2024-06-07 22:59:20.226564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.069 [2024-06-07 22:59:20.307853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.327 [2024-06-07 22:59:20.370167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.327 [2024-06-07 22:59:20.386495] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:28.328 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.328 INFO: Seed: 2001750039 00:08:28.328 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:28.328 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:28.328 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:28.328 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.328 #2 INITED exec/s: 0 rss: 62Mb 00:08:28.328 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.328 This may also happen if the target rejected all inputs we tried so far 00:08:28.328 [2024-06-07 22:59:20.463241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:28.328 [2024-06-07 22:59:20.463289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.328 [2024-06-07 22:59:20.463390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:28.328 [2024-06-07 22:59:20.463418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.328 [2024-06-07 22:59:20.463538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:28.328 [2024-06-07 22:59:20.463564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.328 [2024-06-07 22:59:20.463691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:28.328 [2024-06-07 22:59:20.463714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.895 NEW_FUNC[1/685]: 0x4a3570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:28.895 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.895 #24 NEW cov: 11857 ft: 11856 corp: 2/48b lim: 50 exec/s: 0 rss: 69Mb L: 47/47 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:28.895 [2024-06-07 22:59:20.924383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:28.895 [2024-06-07 22:59:20.924435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:20.924593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:20.924625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:20.924771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:20.924803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.895 #25 NEW cov: 11987 ft: 12803 corp: 3/78b lim: 50 exec/s: 0 rss: 69Mb L: 30/47 MS: 1 CrossOver- 00:08:28.895 [2024-06-07 22:59:21.004046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:940407953243835661 len:3342 00:08:28.895 [2024-06-07 22:59:21.004090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.895 #30 NEW cov: 11993 ft: 13346 corp: 4/90b lim: 50 exec/s: 0 rss: 69Mb L: 12/47 MS: 5 CrossOver-InsertRepeatedBytes-ShuffleBytes-ShuffleBytes-CopyPart- 00:08:28.895 [2024-06-07 22:59:21.064790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:28.895 [2024-06-07 22:59:21.064828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.064873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:21.064903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.065032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:21.065055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.065185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:777389080576 len:1 00:08:28.895 [2024-06-07 22:59:21.065214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.895 #31 NEW cov: 12078 ft: 13705 corp: 5/138b lim: 50 exec/s: 0 rss: 69Mb L: 48/48 MS: 1 InsertByte- 00:08:28.895 [2024-06-07 22:59:21.125001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:28.895 [2024-06-07 22:59:21.125041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.125116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:21.125140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.125264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:28.895 [2024-06-07 22:59:21.125291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.895 [2024-06-07 22:59:21.125421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:46337 00:08:28.895 [2024-06-07 22:59:21.125450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.154 #32 NEW cov: 12078 ft: 13807 corp: 6/187b lim: 50 exec/s: 0 rss: 69Mb L: 49/49 MS: 1 CrossOver- 00:08:29.154 [2024-06-07 22:59:21.205284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.154 [2024-06-07 22:59:21.205326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.154 [2024-06-07 22:59:21.205405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.154 [2024-06-07 22:59:21.205426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.154 [2024-06-07 22:59:21.205549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.154 [2024-06-07 22:59:21.205581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.154 [2024-06-07 22:59:21.205707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2810246167479189504 len:1 00:08:29.155 [2024-06-07 22:59:21.205735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.155 #33 NEW cov: 12078 ft: 13995 corp: 7/235b lim: 50 exec/s: 0 rss: 69Mb L: 48/49 MS: 1 InsertByte- 00:08:29.155 [2024-06-07 22:59:21.265053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.155 [2024-06-07 22:59:21.265096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.265147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.265173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.155 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:29.155 #34 NEW cov: 12101 ft: 14248 corp: 8/256b lim: 50 exec/s: 0 rss: 69Mb L: 21/49 MS: 1 EraseBytes- 00:08:29.155 [2024-06-07 22:59:21.345743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.155 [2024-06-07 22:59:21.345786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.345867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.345893] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.346018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.346044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.346183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2810246167479189504 len:1 00:08:29.155 [2024-06-07 22:59:21.346211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.155 #35 NEW cov: 12101 ft: 14381 corp: 9/305b lim: 50 exec/s: 0 rss: 70Mb L: 49/49 MS: 1 InsertByte- 00:08:29.155 [2024-06-07 22:59:21.425896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.155 [2024-06-07 22:59:21.425937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.426012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.426042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.426171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.426195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.155 [2024-06-07 22:59:21.426320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:29.155 [2024-06-07 22:59:21.426351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.414 #36 NEW cov: 12101 ft: 14416 corp: 10/352b lim: 50 exec/s: 36 rss: 70Mb L: 47/49 MS: 1 CopyPart- 00:08:29.414 [2024-06-07 22:59:21.486133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.414 [2024-06-07 22:59:21.486171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.486212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.414 [2024-06-07 22:59:21.486244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.486376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744069414584575 len:65536 00:08:29.414 [2024-06-07 22:59:21.486399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.486524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374686483966590975 len:1 00:08:29.414 
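For reading the stream above: in each '#N NEW cov: ... MS: ...' status line, cov is the number of coverage points hit so far, ft the number of features, corp the corpus size in units and bytes, lim the current input-length cap, L the new input's length alongside the largest unit in the corpus, and MS the mutation sequence that produced the input. A small sketch to pull a run's coverage growth out of a saved copy of this log (the file name is assumed):

grep -oE '#[0-9]+ NEW cov: [0-9]+' console.log |
    awk '{ sub(/#/, "", $1); print $1, $4 }'    # prints: iteration, cov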
[2024-06-07 22:59:21.486550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.414 #37 NEW cov: 12101 ft: 14510 corp: 11/394b lim: 50 exec/s: 37 rss: 70Mb L: 42/49 MS: 1 InsertRepeatedBytes- 00:08:29.414 [2024-06-07 22:59:21.546233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.414 [2024-06-07 22:59:21.546274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.546310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.414 [2024-06-07 22:59:21.546339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.546469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.414 [2024-06-07 22:59:21.546493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.546624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2810246167479189504 len:1 00:08:29.414 [2024-06-07 22:59:21.546652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.414 #38 NEW cov: 12101 ft: 14519 corp: 12/442b lim: 50 exec/s: 38 rss: 70Mb L: 48/49 MS: 1 CrossOver- 00:08:29.414 [2024-06-07 22:59:21.606401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.414 [2024-06-07 22:59:21.606439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.606501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2 len:1 00:08:29.414 [2024-06-07 22:59:21.606530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.606658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.414 [2024-06-07 22:59:21.606683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.606805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:29.414 [2024-06-07 22:59:21.606831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.414 #39 NEW cov: 12101 ft: 14534 corp: 13/490b lim: 50 exec/s: 39 rss: 70Mb L: 48/49 MS: 1 InsertByte- 00:08:29.414 [2024-06-07 22:59:21.686889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.414 [2024-06-07 22:59:21.686931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.687018] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:237 00:08:29.414 [2024-06-07 22:59:21.687044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.414 [2024-06-07 22:59:21.687178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.415 [2024-06-07 22:59:21.687209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.415 [2024-06-07 22:59:21.687344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:182 00:08:29.415 [2024-06-07 22:59:21.687369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.415 [2024-06-07 22:59:21.687501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:08:29.415 [2024-06-07 22:59:21.687529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:29.674 #45 NEW cov: 12101 ft: 14573 corp: 14/540b lim: 50 exec/s: 45 rss: 70Mb L: 50/50 MS: 1 InsertByte- 00:08:29.674 [2024-06-07 22:59:21.766698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637558784 len:1 00:08:29.674 [2024-06-07 22:59:21.766738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.674 [2024-06-07 22:59:21.766783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.674 [2024-06-07 22:59:21.766813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.674 [2024-06-07 22:59:21.766944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.674 [2024-06-07 22:59:21.766970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.674 #46 NEW cov: 12101 ft: 14616 corp: 15/571b lim: 50 exec/s: 46 rss: 70Mb L: 31/50 MS: 1 InsertByte- 00:08:29.674 [2024-06-07 22:59:21.826508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:29.674 [2024-06-07 22:59:21.826553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.674 #47 NEW cov: 12101 ft: 14647 corp: 16/585b lim: 50 exec/s: 47 rss: 70Mb L: 14/50 MS: 1 EraseBytes- 00:08:29.674 [2024-06-07 22:59:21.907339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.674 [2024-06-07 22:59:21.907379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.674 [2024-06-07 22:59:21.907440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.674 [2024-06-07 22:59:21.907461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.674 [2024-06-07 22:59:21.907590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.674 [2024-06-07 22:59:21.907614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.674 [2024-06-07 22:59:21.907756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:46337 00:08:29.674 [2024-06-07 22:59:21.907780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.674 #48 NEW cov: 12101 ft: 14696 corp: 17/634b lim: 50 exec/s: 48 rss: 70Mb L: 49/50 MS: 1 ShuffleBytes- 00:08:29.934 [2024-06-07 22:59:21.967372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.934 [2024-06-07 22:59:21.967412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:21.967451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:21.967479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:21.967609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:21.967638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.934 #49 NEW cov: 12101 ft: 14712 corp: 18/664b lim: 50 exec/s: 49 rss: 70Mb L: 30/50 MS: 1 ShuffleBytes- 00:08:29.934 [2024-06-07 22:59:22.027915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.934 [2024-06-07 22:59:22.027955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.028052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:237 00:08:29.934 [2024-06-07 22:59:22.028081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.028204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:22.028232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.028357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:22.028391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.028523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:3036676096 len:1 00:08:29.934 [2024-06-07 22:59:22.028554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 
cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:29.934 #50 NEW cov: 12101 ft: 14755 corp: 19/714b lim: 50 exec/s: 50 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:29.934 [2024-06-07 22:59:22.107397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4259840 len:1 00:08:29.934 [2024-06-07 22:59:22.107438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.934 #51 NEW cov: 12101 ft: 14793 corp: 20/729b lim: 50 exec/s: 51 rss: 70Mb L: 15/50 MS: 1 InsertByte- 00:08:29.934 [2024-06-07 22:59:22.188243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:29.934 [2024-06-07 22:59:22.188286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.188349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:22.188376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.188500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1039382085632 len:1 00:08:29.934 [2024-06-07 22:59:22.188530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.934 [2024-06-07 22:59:22.188658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:08:29.934 [2024-06-07 22:59:22.188684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.193 #52 NEW cov: 12101 ft: 14818 corp: 21/776b lim: 50 exec/s: 52 rss: 70Mb L: 47/50 MS: 1 ChangeByte- 00:08:30.193 [2024-06-07 22:59:22.248644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:30.194 [2024-06-07 22:59:22.248687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.248778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 22:59:22.248805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.248937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:259484744155136 len:1 00:08:30.194 [2024-06-07 22:59:22.248965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.249099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:182 00:08:30.194 [2024-06-07 22:59:22.249128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.249260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 
22:59:22.249286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:30.194 #53 NEW cov: 12101 ft: 14827 corp: 22/826b lim: 50 exec/s: 53 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:30.194 [2024-06-07 22:59:22.308334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:30.194 [2024-06-07 22:59:22.308372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.308425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 22:59:22.308452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.308584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 22:59:22.308607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.194 #54 NEW cov: 12101 ft: 14874 corp: 23/859b lim: 50 exec/s: 54 rss: 70Mb L: 33/50 MS: 1 CrossOver- 00:08:30.194 [2024-06-07 22:59:22.368830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:637534208 len:1 00:08:30.194 [2024-06-07 22:59:22.368869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.368924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 22:59:22.368954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.369079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:30.194 [2024-06-07 22:59:22.369104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.194 [2024-06-07 22:59:22.369234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10977524091715584 len:1 00:08:30.194 [2024-06-07 22:59:22.369257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.194 #55 NEW cov: 12101 ft: 14916 corp: 24/908b lim: 50 exec/s: 55 rss: 71Mb L: 49/50 MS: 1 CopyPart- 00:08:30.194 [2024-06-07 22:59:22.448345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374968010701868543 len:3342 00:08:30.194 [2024-06-07 22:59:22.448388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.454 #56 NEW cov: 12101 ft: 14917 corp: 25/920b lim: 50 exec/s: 28 rss: 71Mb L: 12/50 MS: 1 CMP- DE: "\377\377\001\000"- 00:08:30.454 #56 DONE cov: 12101 ft: 14917 corp: 25/920b lim: 50 exec/s: 28 rss: 71Mb 00:08:30.454 ###### Recommended dictionary. ###### 00:08:30.454 "\377\377\001\000" # Uses: 0 00:08:30.454 ###### End of recommended dictionary. 
######
00:08:30.454 Done 56 runs in 2 second(s)
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:30.454 22:59:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20
00:08:30.713 [2024-06-07 22:59:22.682883] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:08:30.713 [2024-06-07 22:59:22.682958] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158447 ]
00:08:30.713 EAL: No free 2048 kB hugepages reported on node 1
00:08:30.972 [2024-06-07 22:59:23.007484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.972 [2024-06-07 22:59:23.069861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.972 [2024-06-07 22:59:23.086175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:30.972 [2024-06-07 22:59:23.086175] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:08:30.972 INFO: Running with entropic power schedule (0xFF, 100).
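Run 19 above closed with a recommended dictionary holding one token, "\377\377\001\000". libFuzzer can take such tokens back as a -dict= file so later runs reach the same comparisons sooner. A sketch under two assumptions: the dictionary file name is invented here, and forwarding -dict= through this harness is not something the log itself demonstrates.

cat > /tmp/llvm_nvmf_19.dict <<'EOF'
# AFL/libFuzzer dictionary syntax; the octal token \377\377\001\000 above, written in hex
nvmf_token_1="\xff\xff\x01\x00"
EOF
# then append -dict=/tmp/llvm_nvmf_19.dict to the llvm_nvme_fuzz invocation above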
00:08:30.972 INFO: Seed: 406788112 00:08:30.972 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:30.972 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:30.972 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:30.972 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.972 #2 INITED exec/s: 0 rss: 63Mb 00:08:30.972 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:30.972 This may also happen if the target rejected all inputs we tried so far 00:08:30.972 [2024-06-07 22:59:23.151714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.972 [2024-06-07 22:59:23.151754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.972 [2024-06-07 22:59:23.151808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.972 [2024-06-07 22:59:23.151829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.540 NEW_FUNC[1/687]: 0x4a5130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:31.540 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.540 #19 NEW cov: 11915 ft: 11910 corp: 2/53b lim: 90 exec/s: 0 rss: 70Mb L: 52/52 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:31.540 [2024-06-07 22:59:23.602823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.540 [2024-06-07 22:59:23.602882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.540 [2024-06-07 22:59:23.602964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.540 [2024-06-07 22:59:23.602994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.540 #20 NEW cov: 12045 ft: 12509 corp: 3/106b lim: 90 exec/s: 0 rss: 70Mb L: 53/53 MS: 1 InsertByte- 00:08:31.540 [2024-06-07 22:59:23.672934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.540 [2024-06-07 22:59:23.672970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.540 [2024-06-07 22:59:23.673003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.540 [2024-06-07 22:59:23.673022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.540 #23 NEW cov: 12051 ft: 12930 corp: 4/149b lim: 90 exec/s: 0 rss: 70Mb L: 43/53 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:31.540 [2024-06-07 22:59:23.723209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.540 [2024-06-07 22:59:23.723248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.540 [2024-06-07 22:59:23.723283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.540 [2024-06-07 22:59:23.723303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.540 [2024-06-07 22:59:23.723365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:31.540 [2024-06-07 22:59:23.723386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.540 #24 NEW cov: 12136 ft: 13492 corp: 5/203b lim: 90 exec/s: 0 rss: 70Mb L: 54/54 MS: 1 InsertByte- 00:08:31.540 [2024-06-07 22:59:23.793262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.540 [2024-06-07 22:59:23.793295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.540 [2024-06-07 22:59:23.793330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.540 [2024-06-07 22:59:23.793350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.799 #25 NEW cov: 12136 ft: 13598 corp: 6/242b lim: 90 exec/s: 0 rss: 70Mb L: 39/54 MS: 1 EraseBytes- 00:08:31.799 [2024-06-07 22:59:23.863588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.799 [2024-06-07 22:59:23.863623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.799 [2024-06-07 22:59:23.863680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.799 [2024-06-07 22:59:23.863700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.799 [2024-06-07 22:59:23.863763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:31.800 [2024-06-07 22:59:23.863783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.800 #26 NEW cov: 12136 ft: 13775 corp: 7/296b lim: 90 exec/s: 0 rss: 70Mb L: 54/54 MS: 1 ShuffleBytes- 00:08:31.800 [2024-06-07 22:59:23.913754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.800 [2024-06-07 22:59:23.913788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:23.913842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.800 [2024-06-07 22:59:23.913862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:23.913923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:31.800 [2024-06-07 22:59:23.913942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.800 #27 NEW cov: 12136 ft: 13891 corp: 8/350b lim: 90 exec/s: 0 rss: 70Mb L: 54/54 MS: 1 ChangeBinInt- 00:08:31.800 [2024-06-07 22:59:23.963710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.800 [2024-06-07 22:59:23.963743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:23.963786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.800 [2024-06-07 22:59:23.963806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.800 #28 NEW cov: 12136 ft: 13979 corp: 9/403b lim: 90 exec/s: 0 rss: 70Mb L: 53/54 MS: 1 ShuffleBytes- 00:08:31.800 [2024-06-07 22:59:24.014182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:31.800 [2024-06-07 22:59:24.014215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:24.014275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:31.800 [2024-06-07 22:59:24.014294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:24.014356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:31.800 [2024-06-07 22:59:24.014376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.800 [2024-06-07 22:59:24.014439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:31.800 [2024-06-07 22:59:24.014460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.800 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:31.800 #29 NEW cov: 12159 ft: 14370 corp: 10/476b lim: 90 exec/s: 0 rss: 71Mb L: 73/73 MS: 1 CrossOver- 00:08:32.059 [2024-06-07 22:59:24.083869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.059 [2024-06-07 22:59:24.083903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.059 #30 NEW cov: 12159 ft: 15180 corp: 11/511b lim: 90 exec/s: 30 rss: 71Mb L: 35/73 MS: 1 EraseBytes- 00:08:32.059 [2024-06-07 22:59:24.154270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.059 [2024-06-07 22:59:24.154306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.059 [2024-06-07 22:59:24.154341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.059 [2024-06-07 22:59:24.154361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.059 #31 NEW cov: 12159 ft: 15213 corp: 12/563b lim: 
90 exec/s: 31 rss: 71Mb L: 52/73 MS: 1 ChangeByte- 00:08:32.059 [2024-06-07 22:59:24.204375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.059 [2024-06-07 22:59:24.204410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.059 [2024-06-07 22:59:24.204444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.059 [2024-06-07 22:59:24.204464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.059 #32 NEW cov: 12159 ft: 15223 corp: 13/605b lim: 90 exec/s: 32 rss: 71Mb L: 42/73 MS: 1 EraseBytes- 00:08:32.059 [2024-06-07 22:59:24.244482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.060 [2024-06-07 22:59:24.244516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 [2024-06-07 22:59:24.244559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.060 [2024-06-07 22:59:24.244582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.060 #33 NEW cov: 12159 ft: 15232 corp: 14/650b lim: 90 exec/s: 33 rss: 71Mb L: 45/73 MS: 1 EraseBytes- 00:08:32.060 [2024-06-07 22:59:24.295026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.060 [2024-06-07 22:59:24.295065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.060 [2024-06-07 22:59:24.295110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.060 [2024-06-07 22:59:24.295130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.060 [2024-06-07 22:59:24.295191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.060 [2024-06-07 22:59:24.295211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.060 [2024-06-07 22:59:24.295273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:32.060 [2024-06-07 22:59:24.295293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.319 #34 NEW cov: 12159 ft: 15295 corp: 15/735b lim: 90 exec/s: 34 rss: 71Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:08:32.319 [2024-06-07 22:59:24.365023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.319 [2024-06-07 22:59:24.365057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.365104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.319 [2024-06-07 22:59:24.365124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.365186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.319 [2024-06-07 22:59:24.365206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.319 #35 NEW cov: 12159 ft: 15317 corp: 16/789b lim: 90 exec/s: 35 rss: 71Mb L: 54/85 MS: 1 ShuffleBytes- 00:08:32.319 [2024-06-07 22:59:24.435382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.319 [2024-06-07 22:59:24.435414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.435479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.319 [2024-06-07 22:59:24.435499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.435558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.319 [2024-06-07 22:59:24.435582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.435644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:32.319 [2024-06-07 22:59:24.435665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.319 #36 NEW cov: 12159 ft: 15330 corp: 17/862b lim: 90 exec/s: 36 rss: 71Mb L: 73/85 MS: 1 ChangeByte- 00:08:32.319 [2024-06-07 22:59:24.505422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.319 [2024-06-07 22:59:24.505455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.505495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.319 [2024-06-07 22:59:24.505515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.319 [2024-06-07 22:59:24.505592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.319 [2024-06-07 22:59:24.505613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.319 #37 NEW cov: 12159 ft: 15347 corp: 18/916b lim: 90 exec/s: 37 rss: 71Mb L: 54/85 MS: 1 ChangeBit- 00:08:32.319 [2024-06-07 22:59:24.555243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.319 [2024-06-07 22:59:24.555277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.578 #38 NEW cov: 12159 ft: 15421 corp: 19/950b lim: 90 exec/s: 38 rss: 71Mb L: 34/85 MS: 1 EraseBytes- 00:08:32.578 [2024-06-07 22:59:24.625944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.578 
[2024-06-07 22:59:24.625979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.626038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.578 [2024-06-07 22:59:24.626059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.626116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.578 [2024-06-07 22:59:24.626135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.626197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:32.578 [2024-06-07 22:59:24.626218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.578 #39 NEW cov: 12159 ft: 15448 corp: 20/1036b lim: 90 exec/s: 39 rss: 71Mb L: 86/86 MS: 1 InsertByte- 00:08:32.578 [2024-06-07 22:59:24.695774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.578 [2024-06-07 22:59:24.695807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.695848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.578 [2024-06-07 22:59:24.695870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.578 #40 NEW cov: 12159 ft: 15461 corp: 21/1085b lim: 90 exec/s: 40 rss: 72Mb L: 49/86 MS: 1 InsertRepeatedBytes- 00:08:32.578 [2024-06-07 22:59:24.745720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.578 [2024-06-07 22:59:24.745754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.578 #41 NEW cov: 12159 ft: 15489 corp: 22/1120b lim: 90 exec/s: 41 rss: 72Mb L: 35/86 MS: 1 InsertByte- 00:08:32.578 [2024-06-07 22:59:24.816473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.578 [2024-06-07 22:59:24.816507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.816568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.578 [2024-06-07 22:59:24.816596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.816656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.578 [2024-06-07 22:59:24.816676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.578 [2024-06-07 22:59:24.816738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 
00:08:32.578 [2024-06-07 22:59:24.816763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.578 #42 NEW cov: 12159 ft: 15509 corp: 23/1208b lim: 90 exec/s: 42 rss: 72Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:08:32.838 [2024-06-07 22:59:24.866596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.838 [2024-06-07 22:59:24.866630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.866690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.838 [2024-06-07 22:59:24.866710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.866770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.838 [2024-06-07 22:59:24.866790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.866852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:32.838 [2024-06-07 22:59:24.866872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.838 #43 NEW cov: 12159 ft: 15524 corp: 24/1294b lim: 90 exec/s: 43 rss: 72Mb L: 86/88 MS: 1 InsertByte- 00:08:32.838 [2024-06-07 22:59:24.916411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.838 [2024-06-07 22:59:24.916445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.916481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.838 [2024-06-07 22:59:24.916501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.838 #44 NEW cov: 12159 ft: 15539 corp: 25/1343b lim: 90 exec/s: 44 rss: 72Mb L: 49/88 MS: 1 CopyPart- 00:08:32.838 [2024-06-07 22:59:24.986752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.838 [2024-06-07 22:59:24.986787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.986829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.838 [2024-06-07 22:59:24.986849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:24.986910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.838 [2024-06-07 22:59:24.986931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.838 #45 NEW cov: 12159 ft: 15551 corp: 26/1414b lim: 90 exec/s: 45 rss: 72Mb L: 71/88 MS: 1 CopyPart- 00:08:32.838 [2024-06-07 
22:59:25.057010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:32.838 [2024-06-07 22:59:25.057045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:25.057096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:32.838 [2024-06-07 22:59:25.057115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.838 [2024-06-07 22:59:25.057177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:32.838 [2024-06-07 22:59:25.057198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.838 #46 NEW cov: 12159 ft: 15567 corp: 27/1484b lim: 90 exec/s: 46 rss: 72Mb L: 70/88 MS: 1 EraseBytes- 00:08:33.098 [2024-06-07 22:59:25.127200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:33.098 [2024-06-07 22:59:25.127234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.098 [2024-06-07 22:59:25.127275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:33.098 [2024-06-07 22:59:25.127295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.098 [2024-06-07 22:59:25.127357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:33.098 [2024-06-07 22:59:25.127377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.098 #52 NEW cov: 12159 ft: 15594 corp: 28/1549b lim: 90 exec/s: 26 rss: 72Mb L: 65/88 MS: 1 CrossOver- 00:08:33.098 #52 DONE cov: 12159 ft: 15594 corp: 28/1549b lim: 90 exec/s: 26 rss: 72Mb 00:08:33.098 Done 52 runs in 2 second(s) 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # 
port=4421 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:33.098 22:59:25 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:33.098 [2024-06-07 22:59:25.343559] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:33.098 [2024-06-07 22:59:25.343647] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158914 ] 00:08:33.357 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.357 [2024-06-07 22:59:25.580905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.617 [2024-06-07 22:59:25.660361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.617 [2024-06-07 22:59:25.722731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.617 [2024-06-07 22:59:25.739100] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:33.617 INFO: Running with entropic power schedule (0xFF, 100). 00:08:33.617 INFO: Seed: 3058785771 00:08:33.617 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:33.617 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:33.617 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:33.617 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.617 #2 INITED exec/s: 0 rss: 63Mb 00:08:33.617 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:33.617 This may also happen if the target rejected all inputs we tried so far 00:08:33.617 [2024-06-07 22:59:25.794910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.617 [2024-06-07 22:59:25.794948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.617 [2024-06-07 22:59:25.794993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.617 [2024-06-07 22:59:25.795013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.617 [2024-06-07 22:59:25.795077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.617 [2024-06-07 22:59:25.795096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.617 [2024-06-07 22:59:25.795159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:33.617 [2024-06-07 22:59:25.795179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.185 NEW_FUNC[1/686]: 0x4a8350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:34.185 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:34.185 #19 NEW cov: 11860 ft: 11888 corp: 2/48b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:34.185 [2024-06-07 22:59:26.245693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.185 [2024-06-07 22:59:26.245735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.245783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.185 [2024-06-07 22:59:26.245805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.185 NEW_FUNC[1/1]: 0x1dda5e0 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:938 00:08:34.185 #20 NEW cov: 12020 ft: 12932 corp: 3/77b lim: 50 exec/s: 0 rss: 71Mb L: 29/47 MS: 1 InsertRepeatedBytes- 00:08:34.185 [2024-06-07 22:59:26.306077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.185 [2024-06-07 22:59:26.306112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.306168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.185 [2024-06-07 22:59:26.306189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.306252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.185 [2024-06-07 
22:59:26.306272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.306338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.185 [2024-06-07 22:59:26.306358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.185 #21 NEW cov: 12026 ft: 13116 corp: 4/124b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeBit- 00:08:34.185 [2024-06-07 22:59:26.376307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.185 [2024-06-07 22:59:26.376342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.376391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.185 [2024-06-07 22:59:26.376411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.376474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.185 [2024-06-07 22:59:26.376494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.376557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.185 [2024-06-07 22:59:26.376583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.185 #22 NEW cov: 12111 ft: 13357 corp: 5/171b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeByte- 00:08:34.185 [2024-06-07 22:59:26.426481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.185 [2024-06-07 22:59:26.426516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.426573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.185 [2024-06-07 22:59:26.426601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.426664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.185 [2024-06-07 22:59:26.426683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.185 [2024-06-07 22:59:26.426745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.185 [2024-06-07 22:59:26.426766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.444 #23 NEW cov: 12111 ft: 13515 corp: 6/218b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 CopyPart- 00:08:34.444 [2024-06-07 22:59:26.496619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.444 [2024-06-07 22:59:26.496656] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.444 [2024-06-07 22:59:26.496712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.444 [2024-06-07 22:59:26.496733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.444 [2024-06-07 22:59:26.496796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.444 [2024-06-07 22:59:26.496816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.444 [2024-06-07 22:59:26.496879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.444 [2024-06-07 22:59:26.496904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.444 #24 NEW cov: 12111 ft: 13590 corp: 7/265b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeBit- 00:08:34.445 [2024-06-07 22:59:26.536751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.445 [2024-06-07 22:59:26.536786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.536845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.445 [2024-06-07 22:59:26.536865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.536926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.445 [2024-06-07 22:59:26.536946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.537009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.445 [2024-06-07 22:59:26.537028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.445 #25 NEW cov: 12111 ft: 13722 corp: 8/312b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ShuffleBytes- 00:08:34.445 [2024-06-07 22:59:26.586530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.445 [2024-06-07 22:59:26.586566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.586602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.445 [2024-06-07 22:59:26.586623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.445 #26 NEW cov: 12111 ft: 13826 corp: 9/338b lim: 50 exec/s: 0 rss: 71Mb L: 26/47 MS: 1 EraseBytes- 00:08:34.445 [2024-06-07 22:59:26.657029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.445 [2024-06-07 22:59:26.657066] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.657121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.445 [2024-06-07 22:59:26.657142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.657205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.445 [2024-06-07 22:59:26.657225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.445 [2024-06-07 22:59:26.657288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.445 [2024-06-07 22:59:26.657308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.445 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:34.445 #27 NEW cov: 12134 ft: 13869 corp: 10/385b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBinInt- 00:08:34.703 [2024-06-07 22:59:26.727263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.703 [2024-06-07 22:59:26.727299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.703 [2024-06-07 22:59:26.727355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.703 [2024-06-07 22:59:26.727375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.703 [2024-06-07 22:59:26.727442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.703 [2024-06-07 22:59:26.727463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.703 [2024-06-07 22:59:26.727527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.703 [2024-06-07 22:59:26.727548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.703 #28 NEW cov: 12134 ft: 13905 corp: 11/432b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeByte- 00:08:34.703 [2024-06-07 22:59:26.767408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.703 [2024-06-07 22:59:26.767443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.703 [2024-06-07 22:59:26.767500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.704 [2024-06-07 22:59:26.767519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.767585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.704 [2024-06-07 22:59:26.767606] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.767671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.704 [2024-06-07 22:59:26.767692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.704 #29 NEW cov: 12134 ft: 13926 corp: 12/481b lim: 50 exec/s: 29 rss: 72Mb L: 49/49 MS: 1 CrossOver- 00:08:34.704 [2024-06-07 22:59:26.817521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.704 [2024-06-07 22:59:26.817556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.817619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.704 [2024-06-07 22:59:26.817639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.817701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.704 [2024-06-07 22:59:26.817720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.817783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.704 [2024-06-07 22:59:26.817804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.704 #30 NEW cov: 12134 ft: 13953 corp: 13/528b lim: 50 exec/s: 30 rss: 72Mb L: 47/49 MS: 1 ShuffleBytes- 00:08:34.704 [2024-06-07 22:59:26.867485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.704 [2024-06-07 22:59:26.867520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.867558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.704 [2024-06-07 22:59:26.867583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.867647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.704 [2024-06-07 22:59:26.867670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.704 #31 NEW cov: 12134 ft: 14189 corp: 14/567b lim: 50 exec/s: 31 rss: 72Mb L: 39/49 MS: 1 EraseBytes- 00:08:34.704 [2024-06-07 22:59:26.937734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.704 [2024-06-07 22:59:26.937769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.937806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.704 [2024-06-07 22:59:26.937826] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.704 [2024-06-07 22:59:26.937889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.704 [2024-06-07 22:59:26.937909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.962 #32 NEW cov: 12134 ft: 14211 corp: 15/606b lim: 50 exec/s: 32 rss: 72Mb L: 39/49 MS: 1 ChangeBit- 00:08:34.962 [2024-06-07 22:59:27.008093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.962 [2024-06-07 22:59:27.008128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.962 [2024-06-07 22:59:27.008181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.962 [2024-06-07 22:59:27.008201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.962 [2024-06-07 22:59:27.008263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.962 [2024-06-07 22:59:27.008282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.008344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.963 [2024-06-07 22:59:27.008364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.963 #33 NEW cov: 12134 ft: 14255 corp: 16/653b lim: 50 exec/s: 33 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:08:34.963 [2024-06-07 22:59:27.058158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.963 [2024-06-07 22:59:27.058192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.058250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.963 [2024-06-07 22:59:27.058270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.058332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.963 [2024-06-07 22:59:27.058351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.058414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.963 [2024-06-07 22:59:27.058434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.963 #34 NEW cov: 12134 ft: 14287 corp: 17/699b lim: 50 exec/s: 34 rss: 72Mb L: 46/49 MS: 1 EraseBytes- 00:08:34.963 [2024-06-07 22:59:27.128415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.963 [2024-06-07 22:59:27.128450] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.128502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.963 [2024-06-07 22:59:27.128522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.128590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:34.963 [2024-06-07 22:59:27.128611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.128673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:34.963 [2024-06-07 22:59:27.128694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.963 #35 NEW cov: 12134 ft: 14322 corp: 18/746b lim: 50 exec/s: 35 rss: 72Mb L: 47/49 MS: 1 CopyPart- 00:08:34.963 [2024-06-07 22:59:27.178184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:34.963 [2024-06-07 22:59:27.178218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.963 [2024-06-07 22:59:27.178257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:34.963 [2024-06-07 22:59:27.178278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.963 #36 NEW cov: 12134 ft: 14368 corp: 19/770b lim: 50 exec/s: 36 rss: 72Mb L: 24/49 MS: 1 EraseBytes- 00:08:35.222 [2024-06-07 22:59:27.248755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.222 [2024-06-07 22:59:27.248790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.248849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.222 [2024-06-07 22:59:27.248870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.248932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.222 [2024-06-07 22:59:27.248950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.249016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.222 [2024-06-07 22:59:27.249035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.222 #37 NEW cov: 12134 ft: 14385 corp: 20/817b lim: 50 exec/s: 37 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:08:35.222 [2024-06-07 22:59:27.298928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.222 [2024-06-07 22:59:27.298963] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.299023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.222 [2024-06-07 22:59:27.299042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.299105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.222 [2024-06-07 22:59:27.299123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.299187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.222 [2024-06-07 22:59:27.299207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.222 #38 NEW cov: 12134 ft: 14498 corp: 21/864b lim: 50 exec/s: 38 rss: 72Mb L: 47/49 MS: 1 InsertByte- 00:08:35.222 [2024-06-07 22:59:27.369079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.222 [2024-06-07 22:59:27.369113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.369171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.222 [2024-06-07 22:59:27.369191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.369254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.222 [2024-06-07 22:59:27.369276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.369339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.222 [2024-06-07 22:59:27.369359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.222 #39 NEW cov: 12134 ft: 14507 corp: 22/911b lim: 50 exec/s: 39 rss: 72Mb L: 47/49 MS: 1 ChangeBinInt- 00:08:35.222 [2024-06-07 22:59:27.439294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.222 [2024-06-07 22:59:27.439329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.439388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.222 [2024-06-07 22:59:27.439407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.439469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.222 [2024-06-07 22:59:27.439488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.439549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.222 [2024-06-07 22:59:27.439569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.222 #40 NEW cov: 12134 ft: 14544 corp: 23/958b lim: 50 exec/s: 40 rss: 72Mb L: 47/49 MS: 1 ChangeByte- 00:08:35.222 [2024-06-07 22:59:27.479400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.222 [2024-06-07 22:59:27.479435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.479484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.222 [2024-06-07 22:59:27.479503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.479565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.222 [2024-06-07 22:59:27.479590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.222 [2024-06-07 22:59:27.479653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.222 [2024-06-07 22:59:27.479673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.482 #41 NEW cov: 12134 ft: 14550 corp: 24/1006b lim: 50 exec/s: 41 rss: 73Mb L: 48/49 MS: 1 InsertByte- 00:08:35.482 [2024-06-07 22:59:27.529544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.482 [2024-06-07 22:59:27.529589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.529634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.482 [2024-06-07 22:59:27.529653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.529716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.482 [2024-06-07 22:59:27.529736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.529799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.482 [2024-06-07 22:59:27.529818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.482 #42 NEW cov: 12134 ft: 14567 corp: 25/1054b lim: 50 exec/s: 42 rss: 73Mb L: 48/49 MS: 1 CopyPart- 00:08:35.482 [2024-06-07 22:59:27.599717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.482 [2024-06-07 22:59:27.599751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.599810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.482 [2024-06-07 22:59:27.599830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.599893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.482 [2024-06-07 22:59:27.599911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.599977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.482 [2024-06-07 22:59:27.599996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.482 #43 NEW cov: 12134 ft: 14581 corp: 26/1101b lim: 50 exec/s: 43 rss: 73Mb L: 47/49 MS: 1 ChangeByte- 00:08:35.482 [2024-06-07 22:59:27.649902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.482 [2024-06-07 22:59:27.649937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.649992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.482 [2024-06-07 22:59:27.650011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.650073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.482 [2024-06-07 22:59:27.650092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.650157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.482 [2024-06-07 22:59:27.650177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.482 #44 NEW cov: 12134 ft: 14597 corp: 27/1149b lim: 50 exec/s: 44 rss: 73Mb L: 48/49 MS: 1 ChangeBit- 00:08:35.482 [2024-06-07 22:59:27.720100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.482 [2024-06-07 22:59:27.720136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.720184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.482 [2024-06-07 22:59:27.720208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.720268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.482 [2024-06-07 22:59:27.720288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.482 [2024-06-07 22:59:27.720352] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:35.482 [2024-06-07 22:59:27.720370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.482 #45 NEW cov: 12134 ft: 14617 corp: 28/1197b lim: 50 exec/s: 45 rss: 73Mb L: 48/49 MS: 1 InsertByte- 00:08:35.742 [2024-06-07 22:59:27.770008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:35.742 [2024-06-07 22:59:27.770042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.742 [2024-06-07 22:59:27.770093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:35.742 [2024-06-07 22:59:27.770112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.742 [2024-06-07 22:59:27.770174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:35.742 [2024-06-07 22:59:27.770193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.742 #46 NEW cov: 12134 ft: 14647 corp: 29/1230b lim: 50 exec/s: 23 rss: 73Mb L: 33/49 MS: 1 CrossOver- 00:08:35.742 #46 DONE cov: 12134 ft: 14647 corp: 29/1230b lim: 50 exec/s: 23 rss: 73Mb 00:08:35.742 Done 46 runs in 2 second(s) 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:08:35.742 22:59:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:35.742 [2024-06-07 22:59:28.001112] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:35.742 [2024-06-07 22:59:28.001186] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159273 ] 00:08:36.002 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.002 [2024-06-07 22:59:28.244853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.261 [2024-06-07 22:59:28.324889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.261 [2024-06-07 22:59:28.387282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.261 [2024-06-07 22:59:28.403673] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:36.261 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.261 INFO: Seed: 1427817914 00:08:36.261 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:36.261 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:36.261 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:36.261 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.261 #2 INITED exec/s: 0 rss: 63Mb 00:08:36.261 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:36.261 This may also happen if the target rejected all inputs we tried so far 00:08:36.261 [2024-06-07 22:59:28.480052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:36.261 [2024-06-07 22:59:28.480098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.261 [2024-06-07 22:59:28.480242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:36.261 [2024-06-07 22:59:28.480274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.830 NEW_FUNC[1/687]: 0x4aa610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:36.830 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:36.830 #8 NEW cov: 11916 ft: 11917 corp: 2/36b lim: 85 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:36.830 [2024-06-07 22:59:28.941490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:36.830 [2024-06-07 22:59:28.941539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:28.941677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:36.830 [2024-06-07 22:59:28.941705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:28.941836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:36.830 [2024-06-07 22:59:28.941862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.830 #19 NEW cov: 12046 ft: 12880 corp: 3/100b lim: 85 exec/s: 0 rss: 70Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:08:36.830 [2024-06-07 22:59:29.021612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:36.830 [2024-06-07 22:59:29.021651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:29.021697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:36.830 [2024-06-07 22:59:29.021723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:29.021850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:36.830 [2024-06-07 22:59:29.021877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.830 #20 NEW cov: 12052 ft: 13229 corp: 4/164b lim: 85 exec/s: 0 rss: 70Mb L: 64/64 MS: 1 ChangeByte- 00:08:36.830 [2024-06-07 22:59:29.102100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:36.830 [2024-06-07 22:59:29.102142] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:29.102202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:36.830 [2024-06-07 22:59:29.102229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:29.102356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:36.830 [2024-06-07 22:59:29.102385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.830 [2024-06-07 22:59:29.102511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:36.830 [2024-06-07 22:59:29.102537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.089 #21 NEW cov: 12137 ft: 13911 corp: 5/232b lim: 85 exec/s: 0 rss: 70Mb L: 68/68 MS: 1 InsertRepeatedBytes- 00:08:37.089 [2024-06-07 22:59:29.181734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.089 [2024-06-07 22:59:29.181775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.089 [2024-06-07 22:59:29.181839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.089 [2024-06-07 22:59:29.181871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.089 #22 NEW cov: 12137 ft: 14037 corp: 6/267b lim: 85 exec/s: 0 rss: 70Mb L: 35/68 MS: 1 ChangeBit- 00:08:37.089 [2024-06-07 22:59:29.242264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.089 [2024-06-07 22:59:29.242303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.089 [2024-06-07 22:59:29.242351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.089 [2024-06-07 22:59:29.242376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.090 [2024-06-07 22:59:29.242505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.090 [2024-06-07 22:59:29.242533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.090 #23 NEW cov: 12137 ft: 14079 corp: 7/322b lim: 85 exec/s: 0 rss: 70Mb L: 55/68 MS: 1 InsertRepeatedBytes- 00:08:37.090 [2024-06-07 22:59:29.302503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.090 [2024-06-07 22:59:29.302547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.090 [2024-06-07 22:59:29.302637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.090 [2024-06-07 22:59:29.302663] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.090 [2024-06-07 22:59:29.302795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.090 [2024-06-07 22:59:29.302821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.090 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:37.090 #24 NEW cov: 12160 ft: 14156 corp: 8/386b lim: 85 exec/s: 0 rss: 70Mb L: 64/68 MS: 1 ShuffleBytes- 00:08:37.090 [2024-06-07 22:59:29.362683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.090 [2024-06-07 22:59:29.362724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.090 [2024-06-07 22:59:29.362783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.090 [2024-06-07 22:59:29.362809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.090 [2024-06-07 22:59:29.362940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.090 [2024-06-07 22:59:29.362969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.349 #25 NEW cov: 12160 ft: 14264 corp: 9/450b lim: 85 exec/s: 0 rss: 70Mb L: 64/68 MS: 1 CopyPart- 00:08:37.349 [2024-06-07 22:59:29.423138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.349 [2024-06-07 22:59:29.423177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.423236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.349 [2024-06-07 22:59:29.423263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.423389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.349 [2024-06-07 22:59:29.423412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.423538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:37.349 [2024-06-07 22:59:29.423565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.349 #26 NEW cov: 12160 ft: 14286 corp: 10/518b lim: 85 exec/s: 26 rss: 71Mb L: 68/68 MS: 1 ChangeByte- 00:08:37.349 [2024-06-07 22:59:29.503009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.349 [2024-06-07 22:59:29.503050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.503111] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.349 [2024-06-07 22:59:29.503139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.503272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.349 [2024-06-07 22:59:29.503303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.349 #27 NEW cov: 12160 ft: 14356 corp: 11/582b lim: 85 exec/s: 27 rss: 71Mb L: 64/68 MS: 1 ShuffleBytes- 00:08:37.349 [2024-06-07 22:59:29.582962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.349 [2024-06-07 22:59:29.583004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.349 [2024-06-07 22:59:29.583085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.349 [2024-06-07 22:59:29.583116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.349 #28 NEW cov: 12160 ft: 14392 corp: 12/617b lim: 85 exec/s: 28 rss: 71Mb L: 35/68 MS: 1 ChangeBinInt- 00:08:37.608 [2024-06-07 22:59:29.642944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.608 [2024-06-07 22:59:29.642986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.608 #29 NEW cov: 12160 ft: 15288 corp: 13/642b lim: 85 exec/s: 29 rss: 71Mb L: 25/68 MS: 1 EraseBytes- 00:08:37.608 [2024-06-07 22:59:29.723500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.608 [2024-06-07 22:59:29.723548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.608 [2024-06-07 22:59:29.723650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.608 [2024-06-07 22:59:29.723676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.608 #30 NEW cov: 12160 ft: 15308 corp: 14/677b lim: 85 exec/s: 30 rss: 71Mb L: 35/68 MS: 1 ChangeBinInt- 00:08:37.608 [2024-06-07 22:59:29.783400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.608 [2024-06-07 22:59:29.783442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.608 #31 NEW cov: 12160 ft: 15363 corp: 15/696b lim: 85 exec/s: 31 rss: 71Mb L: 19/68 MS: 1 EraseBytes- 00:08:37.608 [2024-06-07 22:59:29.864196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.608 [2024-06-07 22:59:29.864238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.608 [2024-06-07 22:59:29.864283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 
00:08:37.608 [2024-06-07 22:59:29.864307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.608 [2024-06-07 22:59:29.864437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.608 [2024-06-07 22:59:29.864466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.867 #32 NEW cov: 12160 ft: 15413 corp: 16/751b lim: 85 exec/s: 32 rss: 71Mb L: 55/68 MS: 1 ShuffleBytes- 00:08:37.867 [2024-06-07 22:59:29.944791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.867 [2024-06-07 22:59:29.944831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.867 [2024-06-07 22:59:29.944889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.867 [2024-06-07 22:59:29.944920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.867 [2024-06-07 22:59:29.945049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.867 [2024-06-07 22:59:29.945079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.868 [2024-06-07 22:59:29.945208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:37.868 [2024-06-07 22:59:29.945231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.868 #33 NEW cov: 12160 ft: 15427 corp: 17/819b lim: 85 exec/s: 33 rss: 71Mb L: 68/68 MS: 1 ChangeBit- 00:08:37.868 [2024-06-07 22:59:30.004711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.868 [2024-06-07 22:59:30.004755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.868 [2024-06-07 22:59:30.004848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:37.868 [2024-06-07 22:59:30.004881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.868 [2024-06-07 22:59:30.005011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:37.868 [2024-06-07 22:59:30.005041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.868 #34 NEW cov: 12160 ft: 15465 corp: 18/883b lim: 85 exec/s: 34 rss: 71Mb L: 64/68 MS: 1 ChangeByte- 00:08:37.868 [2024-06-07 22:59:30.064623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:37.868 [2024-06-07 22:59:30.064666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.868 [2024-06-07 22:59:30.064780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 
nsid:0 00:08:37.868 [2024-06-07 22:59:30.064806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.868 #35 NEW cov: 12160 ft: 15506 corp: 19/918b lim: 85 exec/s: 35 rss: 71Mb L: 35/68 MS: 1 ChangeByte- 00:08:38.127 [2024-06-07 22:59:30.145375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.127 [2024-06-07 22:59:30.145419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.145469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.127 [2024-06-07 22:59:30.145494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.145621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:38.127 [2024-06-07 22:59:30.145652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.145781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:38.127 [2024-06-07 22:59:30.145805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.127 #36 NEW cov: 12160 ft: 15530 corp: 20/986b lim: 85 exec/s: 36 rss: 71Mb L: 68/68 MS: 1 CMP- DE: "\377\015>eiSYt"- 00:08:38.127 [2024-06-07 22:59:30.225338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.127 [2024-06-07 22:59:30.225381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.225466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.127 [2024-06-07 22:59:30.225496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.225629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:38.127 [2024-06-07 22:59:30.225660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.127 #37 NEW cov: 12160 ft: 15566 corp: 21/1041b lim: 85 exec/s: 37 rss: 71Mb L: 55/68 MS: 1 PersAutoDict- DE: "\377\015>eiSYt"- 00:08:38.127 [2024-06-07 22:59:30.285259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.127 [2024-06-07 22:59:30.285302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.285394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.127 [2024-06-07 22:59:30.285417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.127 #38 NEW cov: 12160 ft: 15580 corp: 22/1076b lim: 85 exec/s: 38 rss: 71Mb L: 35/68 
MS: 1 ChangeBinInt- 00:08:38.127 [2024-06-07 22:59:30.345637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.127 [2024-06-07 22:59:30.345678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.345718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.127 [2024-06-07 22:59:30.345745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.127 [2024-06-07 22:59:30.345863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:38.127 [2024-06-07 22:59:30.345890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.127 #39 NEW cov: 12160 ft: 15654 corp: 23/1140b lim: 85 exec/s: 39 rss: 71Mb L: 64/68 MS: 1 ChangeByte- 00:08:38.386 [2024-06-07 22:59:30.405612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.386 [2024-06-07 22:59:30.405652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.386 [2024-06-07 22:59:30.405694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.386 [2024-06-07 22:59:30.405721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.386 #40 NEW cov: 12160 ft: 15672 corp: 24/1175b lim: 85 exec/s: 20 rss: 72Mb L: 35/68 MS: 1 ChangeBit- 00:08:38.386 #40 DONE cov: 12160 ft: 15672 corp: 24/1175b lim: 85 exec/s: 20 rss: 72Mb 00:08:38.386 ###### Recommended dictionary. ###### 00:08:38.386 "\377\015>eiSYt" # Uses: 1 00:08:38.386 ###### End of recommended dictionary. 
###### 00:08:38.386 Done 40 runs in 2 second(s) 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:38.386 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:38.387 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:38.387 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:38.387 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:38.387 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:38.387 22:59:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:38.387 [2024-06-07 22:59:30.639016] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:38.387 [2024-06-07 22:59:30.639086] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159802 ] 00:08:38.645 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.645 [2024-06-07 22:59:30.872534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.904 [2024-06-07 22:59:30.951044] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.904 [2024-06-07 22:59:31.013384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.904 [2024-06-07 22:59:31.029718] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:38.904 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:38.904 INFO: Seed: 4054816213 00:08:38.904 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c), 00:08:38.904 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80), 00:08:38.904 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:38.904 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.904 #2 INITED exec/s: 0 rss: 64Mb 00:08:38.904 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:38.904 This may also happen if the target rejected all inputs we tried so far 00:08:38.904 [2024-06-07 22:59:31.075271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:38.905 [2024-06-07 22:59:31.075302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.905 [2024-06-07 22:59:31.075360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:38.905 [2024-06-07 22:59:31.075373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.905 [2024-06-07 22:59:31.075422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:38.905 [2024-06-07 22:59:31.075437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.905 [2024-06-07 22:59:31.075490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:38.905 [2024-06-07 22:59:31.075506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.474 NEW_FUNC[1/686]: 0x4ad840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:39.474 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:39.474 #9 NEW cov: 11849 ft: 11850 corp: 2/25b lim: 25 exec/s: 0 rss: 71Mb L: 24/24 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:39.474 [2024-06-07 22:59:31.506218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.506258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.506298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.506319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.506377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.474 [2024-06-07 22:59:31.506396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.474 #10 NEW cov: 11979 ft: 12915 corp: 3/44b lim: 25 exec/s: 0 rss: 71Mb L: 19/24 MS: 1 CrossOver- 00:08:39.474 [2024-06-07 22:59:31.556556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.556591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.556664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.556677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.556728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.474 [2024-06-07 22:59:31.556741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.556795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.474 [2024-06-07 22:59:31.556810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.556864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:39.474 [2024-06-07 22:59:31.556880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:39.474 #11 NEW cov: 11985 ft: 13181 corp: 4/69b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertByte- 00:08:39.474 [2024-06-07 22:59:31.596512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.596541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.596615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.596629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.596685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.474 [2024-06-07 22:59:31.596700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.596757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.474 [2024-06-07 22:59:31.596773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.474 #12 NEW cov: 12070 ft: 13586 corp: 5/93b lim: 25 exec/s: 0 rss: 71Mb L: 24/25 MS: 1 ChangeBit- 00:08:39.474 [2024-06-07 22:59:31.636779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.636807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.636879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.636892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.636941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.474 [2024-06-07 22:59:31.636959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.637013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.474 [2024-06-07 22:59:31.637029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.637086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:39.474 [2024-06-07 22:59:31.637101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:39.474 #13 NEW cov: 12070 ft: 13701 corp: 6/118b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 CrossOver- 00:08:39.474 [2024-06-07 22:59:31.676614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.676641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.676700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.676713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.676767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.474 [2024-06-07 22:59:31.676784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.474 #14 NEW cov: 12070 ft: 13792 corp: 7/137b lim: 25 exec/s: 0 rss: 71Mb L: 19/25 MS: 1 ChangeByte- 00:08:39.474 [2024-06-07 22:59:31.726883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.474 [2024-06-07 22:59:31.726911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.726983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.474 [2024-06-07 22:59:31.726996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.474 [2024-06-07 22:59:31.727052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.475 [2024-06-07 22:59:31.727067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.475 [2024-06-07 22:59:31.727126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.475 [2024-06-07 22:59:31.727141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.734 #15 NEW cov: 12070 ft: 13922 corp: 8/158b lim: 25 
exec/s: 0 rss: 71Mb L: 21/25 MS: 1 CMP- DE: "\000\000"- 00:08:39.734 [2024-06-07 22:59:31.777023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.734 [2024-06-07 22:59:31.777050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.734 [2024-06-07 22:59:31.777119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.734 [2024-06-07 22:59:31.777133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.734 [2024-06-07 22:59:31.777176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.734 [2024-06-07 22:59:31.777192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.734 [2024-06-07 22:59:31.777248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.734 [2024-06-07 22:59:31.777267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.734 #16 NEW cov: 12070 ft: 13995 corp: 9/182b lim: 25 exec/s: 0 rss: 71Mb L: 24/25 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:39.734 [2024-06-07 22:59:31.817038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.734 [2024-06-07 22:59:31.817067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.734 [2024-06-07 22:59:31.817140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.734 [2024-06-07 22:59:31.817153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.734 [2024-06-07 22:59:31.817207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.734 [2024-06-07 22:59:31.817223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.734 #17 NEW cov: 12070 ft: 14020 corp: 10/201b lim: 25 exec/s: 0 rss: 71Mb L: 19/25 MS: 1 ShuffleBytes- 00:08:39.734 [2024-06-07 22:59:31.857094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.735 [2024-06-07 22:59:31.857122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.857200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.735 [2024-06-07 22:59:31.857213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.857267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.735 [2024-06-07 22:59:31.857283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.735 #18 NEW cov: 12070 ft: 14064 corp: 11/220b 
lim: 25 exec/s: 0 rss: 72Mb L: 19/25 MS: 1 CrossOver- 00:08:39.735 [2024-06-07 22:59:31.907395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.735 [2024-06-07 22:59:31.907423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.907494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.735 [2024-06-07 22:59:31.907508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.907562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.735 [2024-06-07 22:59:31.907583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.907639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.735 [2024-06-07 22:59:31.907655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.735 #19 NEW cov: 12070 ft: 14148 corp: 12/241b lim: 25 exec/s: 0 rss: 72Mb L: 21/25 MS: 1 ChangeByte- 00:08:39.735 [2024-06-07 22:59:31.957397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.735 [2024-06-07 22:59:31.957425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.957480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.735 [2024-06-07 22:59:31.957493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.957550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.735 [2024-06-07 22:59:31.957566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.735 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:39.735 #20 NEW cov: 12093 ft: 14269 corp: 13/258b lim: 25 exec/s: 0 rss: 72Mb L: 17/25 MS: 1 InsertRepeatedBytes- 00:08:39.735 [2024-06-07 22:59:31.997585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.735 [2024-06-07 22:59:31.997612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.997667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.735 [2024-06-07 22:59:31.997677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.735 [2024-06-07 22:59:31.997734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.735 [2024-06-07 22:59:31.997751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.994 #21 NEW cov: 12093 ft: 14300 corp: 14/277b lim: 25 exec/s: 0 rss: 72Mb L: 19/25 MS: 1 ShuffleBytes- 00:08:39.994 [2024-06-07 22:59:32.047874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.994 [2024-06-07 22:59:32.047901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.994 [2024-06-07 22:59:32.047952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.994 [2024-06-07 22:59:32.047967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.994 [2024-06-07 22:59:32.047986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.994 [2024-06-07 22:59:32.047999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.994 [2024-06-07 22:59:32.048052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.994 [2024-06-07 22:59:32.048068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.994 [2024-06-07 22:59:32.048123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:39.994 [2024-06-07 22:59:32.048139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:39.994 #22 NEW cov: 12093 ft: 14313 corp: 15/302b lim: 25 exec/s: 22 rss: 72Mb L: 25/25 MS: 1 ChangeBit- 00:08:39.994 [2024-06-07 22:59:32.097551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.994 [2024-06-07 22:59:32.097582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.994 #23 NEW cov: 12093 ft: 14741 corp: 16/310b lim: 25 exec/s: 23 rss: 72Mb L: 8/25 MS: 1 CrossOver- 00:08:39.995 [2024-06-07 22:59:32.138046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.995 [2024-06-07 22:59:32.138074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.138127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.995 [2024-06-07 22:59:32.138142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.138194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.995 [2024-06-07 22:59:32.138210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.138263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.995 [2024-06-07 22:59:32.138278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.995 #24 NEW cov: 12093 ft: 14765 corp: 17/334b lim: 25 exec/s: 24 rss: 72Mb L: 24/25 MS: 1 CrossOver- 00:08:39.995 [2024-06-07 22:59:32.188190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.995 [2024-06-07 22:59:32.188218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.188268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.995 [2024-06-07 22:59:32.188284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.188305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.995 [2024-06-07 22:59:32.188320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.188374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.995 [2024-06-07 22:59:32.188389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.995 #25 NEW cov: 12093 ft: 14788 corp: 18/357b lim: 25 exec/s: 25 rss: 72Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:08:39.995 [2024-06-07 22:59:32.228275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.995 [2024-06-07 22:59:32.228303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.228357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:39.995 [2024-06-07 22:59:32.228372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.228404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:39.995 [2024-06-07 22:59:32.228419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.995 [2024-06-07 22:59:32.228473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:39.995 [2024-06-07 22:59:32.228488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.995 #27 NEW cov: 12093 ft: 14823 corp: 19/378b lim: 25 exec/s: 27 rss: 72Mb L: 21/25 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:39.995 [2024-06-07 22:59:32.268000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:39.995 [2024-06-07 22:59:32.268028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 #28 NEW cov: 12093 ft: 14827 corp: 20/386b lim: 25 exec/s: 28 rss: 72Mb L: 8/25 MS: 1 CMP- DE: "\000\000\177H\004%\331i"- 00:08:40.255 [2024-06-07 22:59:32.318172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 
nsid:0 00:08:40.255 [2024-06-07 22:59:32.318200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 #29 NEW cov: 12093 ft: 14834 corp: 21/394b lim: 25 exec/s: 29 rss: 72Mb L: 8/25 MS: 1 CopyPart- 00:08:40.255 [2024-06-07 22:59:32.358650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.255 [2024-06-07 22:59:32.358678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.358729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.255 [2024-06-07 22:59:32.358744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.358768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.255 [2024-06-07 22:59:32.358782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.358836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.255 [2024-06-07 22:59:32.358851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.255 #30 NEW cov: 12093 ft: 14842 corp: 22/418b lim: 25 exec/s: 30 rss: 72Mb L: 24/25 MS: 1 InsertRepeatedBytes- 00:08:40.255 [2024-06-07 22:59:32.398801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.255 [2024-06-07 22:59:32.398829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.398879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.255 [2024-06-07 22:59:32.398894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.398913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.255 [2024-06-07 22:59:32.398928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.255 [2024-06-07 22:59:32.398983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.255 [2024-06-07 22:59:32.398998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.255 #31 NEW cov: 12093 ft: 14851 corp: 23/441b lim: 25 exec/s: 31 rss: 72Mb L: 23/25 MS: 1 InsertRepeatedBytes- 00:08:40.255 [2024-06-07 22:59:32.448589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.255 [2024-06-07 22:59:32.448617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 #32 NEW cov: 12093 ft: 14893 corp: 24/449b lim: 25 exec/s: 32 rss: 73Mb L: 8/25 MS: 1 ChangeBit- 00:08:40.255 [2024-06-07 
22:59:32.498724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.255 [2024-06-07 22:59:32.498752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.255 #33 NEW cov: 12093 ft: 14897 corp: 25/457b lim: 25 exec/s: 33 rss: 73Mb L: 8/25 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:40.514 [2024-06-07 22:59:32.539208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.514 [2024-06-07 22:59:32.539235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.514 [2024-06-07 22:59:32.539289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.514 [2024-06-07 22:59:32.539304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.514 [2024-06-07 22:59:32.539338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.514 [2024-06-07 22:59:32.539357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.514 [2024-06-07 22:59:32.539411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.514 [2024-06-07 22:59:32.539427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.514 #34 NEW cov: 12093 ft: 14914 corp: 26/481b lim: 25 exec/s: 34 rss: 73Mb L: 24/25 MS: 1 CopyPart- 00:08:40.514 [2024-06-07 22:59:32.578917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.514 [2024-06-07 22:59:32.578944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.514 #35 NEW cov: 12093 ft: 14954 corp: 27/490b lim: 25 exec/s: 35 rss: 73Mb L: 9/25 MS: 1 PersAutoDict- DE: "\000\000\177H\004%\331i"- 00:08:40.514 [2024-06-07 22:59:32.619799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.515 [2024-06-07 22:59:32.619827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.619876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.515 [2024-06-07 22:59:32.619891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.619911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.515 [2024-06-07 22:59:32.619927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.619980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.515 [2024-06-07 22:59:32.619995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:08:40.515 #36 NEW cov: 12093 ft: 15056 corp: 28/514b lim: 25 exec/s: 36 rss: 73Mb L: 24/25 MS: 1 CMP- DE: "\001\000\001X"- 00:08:40.515 [2024-06-07 22:59:32.669189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.515 [2024-06-07 22:59:32.669217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.515 #37 NEW cov: 12093 ft: 15064 corp: 29/522b lim: 25 exec/s: 37 rss: 73Mb L: 8/25 MS: 1 CMP- DE: "\377\377~H\004\020\334\377"- 00:08:40.515 [2024-06-07 22:59:32.719707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.515 [2024-06-07 22:59:32.719736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.719804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.515 [2024-06-07 22:59:32.719818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.719862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.515 [2024-06-07 22:59:32.719878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.719935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.515 [2024-06-07 22:59:32.719950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.515 #38 NEW cov: 12093 ft: 15095 corp: 30/545b lim: 25 exec/s: 38 rss: 73Mb L: 23/25 MS: 1 CopyPart- 00:08:40.515 [2024-06-07 22:59:32.759968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.515 [2024-06-07 22:59:32.759999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.760052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.515 [2024-06-07 22:59:32.760067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.760093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.515 [2024-06-07 22:59:32.760108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.760160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.515 [2024-06-07 22:59:32.760175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.515 [2024-06-07 22:59:32.760230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:40.515 [2024-06-07 22:59:32.760246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 
cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:40.515 #39 NEW cov: 12093 ft: 15098 corp: 31/570b lim: 25 exec/s: 39 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:08:40.775 [2024-06-07 22:59:32.800057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:32.800084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.800135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:32.800150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.800169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.775 [2024-06-07 22:59:32.800183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.800236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.775 [2024-06-07 22:59:32.800251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.800305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:40.775 [2024-06-07 22:59:32.800321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:40.775 #40 NEW cov: 12093 ft: 15108 corp: 32/595b lim: 25 exec/s: 40 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:08:40.775 [2024-06-07 22:59:32.849983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:32.850011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.850067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:32.850080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.850131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.775 [2024-06-07 22:59:32.850147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.775 #46 NEW cov: 12093 ft: 15153 corp: 33/614b lim: 25 exec/s: 46 rss: 73Mb L: 19/25 MS: 1 ChangeBit- 00:08:40.775 [2024-06-07 22:59:32.890179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:32.890206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.890259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:32.890274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.890301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.775 [2024-06-07 22:59:32.890317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.890372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:40.775 [2024-06-07 22:59:32.890387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.775 #47 NEW cov: 12093 ft: 15177 corp: 34/636b lim: 25 exec/s: 47 rss: 74Mb L: 22/25 MS: 1 InsertByte- 00:08:40.775 [2024-06-07 22:59:32.940245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:32.940272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.940328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:32.940341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.940395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.775 [2024-06-07 22:59:32.940411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.775 #48 NEW cov: 12093 ft: 15193 corp: 35/655b lim: 25 exec/s: 48 rss: 74Mb L: 19/25 MS: 1 ChangeByte- 00:08:40.775 [2024-06-07 22:59:32.980239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:32.980268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:32.980318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:32.980333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.775 #49 NEW cov: 12093 ft: 15431 corp: 36/667b lim: 25 exec/s: 49 rss: 74Mb L: 12/25 MS: 1 CopyPart- 00:08:40.775 [2024-06-07 22:59:33.020500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:40.775 [2024-06-07 22:59:33.020528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:33.020605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:40.775 [2024-06-07 22:59:33.020619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.775 [2024-06-07 22:59:33.020673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:40.775 [2024-06-07 22:59:33.020689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:40.775 #50 NEW cov: 12093 ft: 15445 corp: 37/684b lim: 25 exec/s: 50 rss: 74Mb L: 17/25 MS: 1 EraseBytes-
00:08:41.034 [2024-06-07 22:59:33.060744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:41.034 [2024-06-07 22:59:33.060772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:41.034 [2024-06-07 22:59:33.060849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:41.034 [2024-06-07 22:59:33.060862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:41.034 [2024-06-07 22:59:33.060916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:41.034 [2024-06-07 22:59:33.060932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:41.034 [2024-06-07 22:59:33.060987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:41.034 [2024-06-07 22:59:33.061003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:41.034 #51 NEW cov: 12093 ft: 15446 corp: 38/706b lim: 25 exec/s: 25 rss: 74Mb L: 22/25 MS: 1 InsertRepeatedBytes-
00:08:41.034 #51 DONE cov: 12093 ft: 15446 corp: 38/706b lim: 25 exec/s: 25 rss: 74Mb
00:08:41.034 ###### Recommended dictionary. ######
00:08:41.034 "\000\000" # Uses: 2
00:08:41.034 "\000\000\177H\004%\331i" # Uses: 1
00:08:41.034 "\001\000\001X" # Uses: 0
00:08:41.034 "\377\377~H\004\020\334\377" # Uses: 0
00:08:41.034 ###### End of recommended dictionary.
###### 00:08:41.034 Done 51 runs in 2 second(s) 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:41.034 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:41.035 22:59:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:41.035 [2024-06-07 22:59:33.270894] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:41.035 [2024-06-07 22:59:33.270960] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160336 ] 00:08:41.293 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.553 [2024-06-07 22:59:33.580898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.553 [2024-06-07 22:59:33.676856] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.553 [2024-06-07 22:59:33.739202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.553 [2024-06-07 22:59:33.755585] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:41.553 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:41.553 INFO: Seed: 2484848374
00:08:41.553 INFO: Loaded 1 modules (357552 inline 8-bit counters): 357552 [0x29a2fcc, 0x29fa47c),
00:08:41.553 INFO: Loaded 1 PC tables (357552 PCs): 357552 [0x29fa480,0x2f6ef80),
00:08:41.553 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:41.553 INFO: A corpus is not provided, starting from an empty corpus
00:08:41.553 #2 INITED exec/s: 0 rss: 63Mb
00:08:41.553 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:41.553 This may also happen if the target rejected all inputs we tried so far
00:08:41.553 [2024-06-07 22:59:33.801200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:41.553 [2024-06-07 22:59:33.801232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:41.553 [2024-06-07 22:59:33.801293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:41.553 [2024-06-07 22:59:33.801304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:41.553 [2024-06-07 22:59:33.801360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:41.553 [2024-06-07 22:59:33.801377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:42.072 NEW_FUNC[1/686]: 0x4ae920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:08:42.072 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:42.072 #15 NEW cov: 11910 ft: 11910 corp: 2/66b lim: 100 exec/s: 0 rss: 70Mb L: 65/65 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes-
00:08:42.072 [2024-06-07 22:59:34.232294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:42.072 [2024-06-07 22:59:34.232330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:42.072 [2024-06-07 22:59:34.232375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:42.072 [2024-06-07 22:59:34.232390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:42.072 [2024-06-07 22:59:34.232448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:42.072 [2024-06-07 22:59:34.232464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:42.072 NEW_FUNC[1/1]: 0x183c9a0 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:171
00:08:42.072 #16 NEW cov: 12051 ft: 12495 corp: 3/131b lim: 100 exec/s: 0 rss: 71Mb
L: 65/65 MS: 1 ChangeBinInt- 00:08:42.072 [2024-06-07 22:59:34.292363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.292394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.072 [2024-06-07 22:59:34.292442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.292461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.072 [2024-06-07 22:59:34.292519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.292536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.072 #17 NEW cov: 12057 ft: 12864 corp: 4/196b lim: 100 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 ChangeByte- 00:08:42.072 [2024-06-07 22:59:34.332478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.332510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.072 [2024-06-07 22:59:34.332572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.332590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.072 [2024-06-07 22:59:34.332647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.072 [2024-06-07 22:59:34.332663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.331 #18 NEW cov: 12142 ft: 13083 corp: 5/274b lim: 100 exec/s: 0 rss: 71Mb L: 78/78 MS: 1 CopyPart- 00:08:42.331 [2024-06-07 22:59:34.382591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.331 [2024-06-07 22:59:34.382637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.331 [2024-06-07 22:59:34.382705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.331 [2024-06-07 22:59:34.382717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.331 [2024-06-07 22:59:34.382775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.331 [2024-06-07 22:59:34.382790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.331 #19 NEW cov: 12142 ft: 13191 corp: 6/339b lim: 100 exec/s: 0 rss: 71Mb L: 65/78 MS: 1 ShuffleBytes- 00:08:42.331 [2024-06-07 22:59:34.422420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.331 [2024-06-07 22:59:34.422448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.331 #22 NEW cov: 12142 ft: 14085 corp: 7/361b lim: 100 exec/s: 0 rss: 71Mb L: 22/78 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:42.331 [2024-06-07 22:59:34.462903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.331 [2024-06-07 22:59:34.462935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.463009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.463021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.463083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.463099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.332 #23 NEW cov: 12142 ft: 14215 corp: 8/439b lim: 100 exec/s: 0 rss: 71Mb L: 78/78 MS: 1 ChangeBinInt- 00:08:42.332 [2024-06-07 22:59:34.513190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.513219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.513280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.513293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.513348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.513365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.513420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:723401728380766730 len:2569 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.513435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.332 #24 NEW cov: 12142 ft: 14573 corp: 9/522b lim: 100 exec/s: 0 rss: 71Mb L: 83/83 MS: 1 CrossOver- 00:08:42.332 [2024-06-07 
22:59:34.553118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.553146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.553206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.553219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.332 [2024-06-07 22:59:34.553277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:736912527262878218 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.553295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.332 #25 NEW cov: 12142 ft: 14592 corp: 10/601b lim: 100 exec/s: 0 rss: 72Mb L: 79/83 MS: 1 InsertByte- 00:08:42.332 [2024-06-07 22:59:34.602890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.332 [2024-06-07 22:59:34.602919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.591 #26 NEW cov: 12142 ft: 14644 corp: 11/623b lim: 100 exec/s: 0 rss: 72Mb L: 22/83 MS: 1 CopyPart- 00:08:42.591 [2024-06-07 22:59:34.653067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.653095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.591 NEW_FUNC[1/1]: 0x1a74020 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:42.591 #27 NEW cov: 12165 ft: 14668 corp: 12/658b lim: 100 exec/s: 0 rss: 72Mb L: 35/83 MS: 1 CopyPart- 00:08:42.591 [2024-06-07 22:59:34.703363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.703392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.591 [2024-06-07 22:59:34.703448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17868022687704414199 len:63480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.703464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.591 #28 NEW cov: 12165 ft: 15009 corp: 13/704b lim: 100 exec/s: 0 rss: 72Mb L: 46/83 MS: 1 InsertRepeatedBytes- 00:08:42.591 [2024-06-07 22:59:34.743621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.743649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:42.591 [2024-06-07 22:59:34.743707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.743720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.591 [2024-06-07 22:59:34.743762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.743778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.591 #29 NEW cov: 12165 ft: 15036 corp: 14/781b lim: 100 exec/s: 0 rss: 72Mb L: 77/83 MS: 1 CrossOver- 00:08:42.591 [2024-06-07 22:59:34.793572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.793608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.591 [2024-06-07 22:59:34.793668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17868022687704414199 len:63480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.591 [2024-06-07 22:59:34.793680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.591 #30 NEW cov: 12165 ft: 15056 corp: 15/825b lim: 100 exec/s: 30 rss: 72Mb L: 44/83 MS: 1 EraseBytes- 00:08:42.592 [2024-06-07 22:59:34.843586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.592 [2024-06-07 22:59:34.843613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 #31 NEW cov: 12165 ft: 15098 corp: 16/860b lim: 100 exec/s: 31 rss: 72Mb L: 35/83 MS: 1 ChangeBit- 00:08:42.851 [2024-06-07 22:59:34.893881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.893910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.893968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17868022687704414199 len:63480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.893979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 #32 NEW cov: 12165 ft: 15125 corp: 17/906b lim: 100 exec/s: 32 rss: 72Mb L: 46/83 MS: 1 ChangeByte- 00:08:42.851 [2024-06-07 22:59:34.934190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.934219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.934281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.934293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.934352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.934368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.851 #33 NEW cov: 12165 ft: 15141 corp: 18/983b lim: 100 exec/s: 33 rss: 72Mb L: 77/83 MS: 1 ShuffleBytes- 00:08:42.851 [2024-06-07 22:59:34.984505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.984533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.984586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.984602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.984622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.984637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:34.984695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:34.984709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.851 #34 NEW cov: 12165 ft: 15149 corp: 19/1080b lim: 100 exec/s: 34 rss: 72Mb L: 97/97 MS: 1 CrossOver- 00:08:42.851 [2024-06-07 22:59:35.034477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.034505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.034566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.034585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.034637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.034652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.851 #35 NEW cov: 12165 ft: 15164 corp: 20/1146b lim: 100 exec/s: 
35 rss: 72Mb L: 66/97 MS: 1 InsertByte- 00:08:42.851 [2024-06-07 22:59:35.074573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.074607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.074668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401730041711114 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.074681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.074722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.074739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.851 #36 NEW cov: 12165 ft: 15176 corp: 21/1224b lim: 100 exec/s: 36 rss: 72Mb L: 78/97 MS: 1 ChangeByte- 00:08:42.851 [2024-06-07 22:59:35.114913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.114940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.114995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.115010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.115042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.115058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.851 [2024-06-07 22:59:35.115116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:723401728380766730 len:2569 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.851 [2024-06-07 22:59:35.115132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.111 #37 NEW cov: 12165 ft: 15180 corp: 22/1322b lim: 100 exec/s: 37 rss: 73Mb L: 98/98 MS: 1 CopyPart- 00:08:43.111 [2024-06-07 22:59:35.164557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.164589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 #38 NEW cov: 12165 ft: 15210 corp: 23/1344b lim: 100 exec/s: 38 rss: 73Mb L: 22/98 MS: 1 ChangeByte- 00:08:43.111 [2024-06-07 22:59:35.204997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:43.111 [2024-06-07 22:59:35.205026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.205083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.205097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.205148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.205164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.111 #39 NEW cov: 12165 ft: 15215 corp: 24/1421b lim: 100 exec/s: 39 rss: 73Mb L: 77/98 MS: 1 ChangeByte- 00:08:43.111 [2024-06-07 22:59:35.245150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.245181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.245248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.245260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.245322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.245336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.111 #40 NEW cov: 12165 ft: 15219 corp: 25/1493b lim: 100 exec/s: 40 rss: 73Mb L: 72/98 MS: 1 EraseBytes- 00:08:43.111 [2024-06-07 22:59:35.285288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.285316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.285373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.285386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.285441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.285457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.111 #41 NEW cov: 12165 ft: 15221 corp: 26/1564b lim: 100 exec/s: 41 rss: 73Mb L: 71/98 MS: 1 CopyPart- 00:08:43.111 [2024-06-07 22:59:35.325389] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.325418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.325472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.325485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.325537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.325554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.111 #42 NEW cov: 12165 ft: 15224 corp: 27/1641b lim: 100 exec/s: 42 rss: 73Mb L: 77/98 MS: 1 CrossOver- 00:08:43.111 [2024-06-07 22:59:35.375549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.375583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.111 [2024-06-07 22:59:35.375643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.111 [2024-06-07 22:59:35.375655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.112 [2024-06-07 22:59:35.375707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9946773765235542538 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.112 [2024-06-07 22:59:35.375727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.371 #43 NEW cov: 12165 ft: 15233 corp: 28/1706b lim: 100 exec/s: 43 rss: 73Mb L: 65/98 MS: 1 ChangeBit- 00:08:43.371 [2024-06-07 22:59:35.425330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.425358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.371 #44 NEW cov: 12165 ft: 15237 corp: 29/1728b lim: 100 exec/s: 44 rss: 73Mb L: 22/98 MS: 1 ChangeASCIIInt- 00:08:43.371 [2024-06-07 22:59:35.465950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:723652326837586442 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.465978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.466027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.466042] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.466063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.466079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.466136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:723401728380766730 len:2569 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.466152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.371 #45 NEW cov: 12165 ft: 15239 corp: 30/1811b lim: 100 exec/s: 45 rss: 73Mb L: 83/98 MS: 1 ChangeBinInt- 00:08:43.371 [2024-06-07 22:59:35.505743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.505771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.505835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.505847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.371 #46 NEW cov: 12165 ft: 15253 corp: 31/1863b lim: 100 exec/s: 46 rss: 73Mb L: 52/98 MS: 1 EraseBytes- 00:08:43.371 [2024-06-07 22:59:35.556072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689303557983646515 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.556100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.556160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3689348814051346954 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.556173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.371 [2024-06-07 22:59:35.556232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401729071974922 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.371 [2024-06-07 22:59:35.556249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.371 #47 NEW cov: 12165 ft: 15268 corp: 32/1930b lim: 100 exec/s: 47 rss: 73Mb L: 67/98 MS: 1 CrossOver- 00:08:43.371 [2024-06-07 22:59:35.606245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.372 [2024-06-07 22:59:35.606273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.372 [2024-06-07 22:59:35.606333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.372 [2024-06-07 22:59:35.606346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.372 [2024-06-07 22:59:35.606402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.372 [2024-06-07 22:59:35.606418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.372 #48 NEW cov: 12165 ft: 15275 corp: 33/2003b lim: 100 exec/s: 48 rss: 73Mb L: 73/98 MS: 1 InsertRepeatedBytes- 00:08:43.372 [2024-06-07 22:59:35.646161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.372 [2024-06-07 22:59:35.646190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.372 [2024-06-07 22:59:35.646248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.372 [2024-06-07 22:59:35.646259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.632 #49 NEW cov: 12165 ft: 15286 corp: 34/2055b lim: 100 exec/s: 49 rss: 74Mb L: 52/98 MS: 1 ChangeByte- 00:08:43.632 [2024-06-07 22:59:35.696325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.696354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.632 [2024-06-07 22:59:35.696410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.696421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.632 #50 NEW cov: 12165 ft: 15319 corp: 35/2107b lim: 100 exec/s: 50 rss: 74Mb L: 52/98 MS: 1 ChangeASCIIInt- 00:08:43.632 [2024-06-07 22:59:35.736452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3689348814054044467 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.736481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.632 [2024-06-07 22:59:35.736536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17868022687704414199 len:63480 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.736552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.632 #51 NEW cov: 12165 ft: 15332 corp: 36/2153b lim: 100 exec/s: 51 rss: 74Mb L: 46/98 MS: 1 CopyPart- 00:08:43.632 [2024-06-07 22:59:35.786956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:732690402612218378 len:2571 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:43.632 [2024-06-07 22:59:35.786986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.632 [2024-06-07 22:59:35.787064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:723401728380766730 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.787080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.632 [2024-06-07 22:59:35.787140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13455272144917891770 len:47803 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.787157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.632 [2024-06-07 22:59:35.787219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:723401731345136138 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.632 [2024-06-07 22:59:35.787237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.632 #52 NEW cov: 12165 ft: 15348 corp: 37/2250b lim: 100 exec/s: 26 rss: 74Mb L: 97/98 MS: 1 InsertRepeatedBytes- 00:08:43.632 #52 DONE cov: 12165 ft: 15348 corp: 37/2250b lim: 100 exec/s: 26 rss: 74Mb 00:08:43.632 Done 52 runs in 2 second(s) 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:43.892 00:08:43.892 real 1m7.808s 00:08:43.892 user 1m39.432s 00:08:43.892 sys 0m9.373s 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:43.892 22:59:35 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:43.892 ************************************ 00:08:43.892 END TEST nvmf_fuzz 00:08:43.892 ************************************ 00:08:43.892 22:59:36 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:43.892 22:59:36 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:43.892 22:59:36 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:43.892 22:59:36 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:43.892 22:59:36 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:43.892 22:59:36 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:43.892 ************************************ 00:08:43.892 START TEST vfio_fuzz 00:08:43.892 ************************************ 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:43.892 * Looking for test storage... 
00:08:43.892 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:43.892 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:43.893 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:44.155 22:59:36 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:44.155 #define SPDK_CONFIG_H 00:08:44.155 #define SPDK_CONFIG_APPS 1 00:08:44.155 #define SPDK_CONFIG_ARCH native 00:08:44.155 #undef SPDK_CONFIG_ASAN 00:08:44.155 #undef SPDK_CONFIG_AVAHI 00:08:44.155 #undef SPDK_CONFIG_CET 00:08:44.155 #define SPDK_CONFIG_COVERAGE 1 00:08:44.155 #define SPDK_CONFIG_CROSS_PREFIX 00:08:44.155 #undef SPDK_CONFIG_CRYPTO 00:08:44.155 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:44.155 #undef SPDK_CONFIG_CUSTOMOCF 00:08:44.155 #undef SPDK_CONFIG_DAOS 00:08:44.155 #define SPDK_CONFIG_DAOS_DIR 00:08:44.155 #define SPDK_CONFIG_DEBUG 1 00:08:44.155 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:44.155 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:44.155 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:44.155 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:44.155 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:44.155 #undef SPDK_CONFIG_DPDK_UADK 00:08:44.155 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:44.155 #define SPDK_CONFIG_EXAMPLES 1 00:08:44.155 #undef SPDK_CONFIG_FC 00:08:44.155 #define SPDK_CONFIG_FC_PATH 00:08:44.155 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:44.155 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:44.155 #undef SPDK_CONFIG_FUSE 00:08:44.155 #define SPDK_CONFIG_FUZZER 1 00:08:44.155 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:44.155 #undef SPDK_CONFIG_GOLANG 00:08:44.155 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:44.155 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:44.155 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:44.155 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:44.155 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:44.155 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:44.155 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:44.155 #define SPDK_CONFIG_IDXD 1 00:08:44.155 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:44.155 #undef SPDK_CONFIG_IPSEC_MB 00:08:44.155 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:44.155 #define SPDK_CONFIG_ISAL 1 00:08:44.155 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:44.155 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:44.155 #define SPDK_CONFIG_LIBDIR 00:08:44.155 #undef SPDK_CONFIG_LTO 00:08:44.155 #define SPDK_CONFIG_MAX_LCORES 00:08:44.155 #define SPDK_CONFIG_NVME_CUSE 1 00:08:44.155 #undef SPDK_CONFIG_OCF 00:08:44.155 #define SPDK_CONFIG_OCF_PATH 00:08:44.155 #define SPDK_CONFIG_OPENSSL_PATH 00:08:44.155 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:44.155 #define SPDK_CONFIG_PGO_DIR 00:08:44.155 #undef SPDK_CONFIG_PGO_USE 00:08:44.155 #define SPDK_CONFIG_PREFIX /usr/local 00:08:44.155 #undef SPDK_CONFIG_RAID5F 00:08:44.155 #undef 
SPDK_CONFIG_RBD 00:08:44.155 #define SPDK_CONFIG_RDMA 1 00:08:44.155 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:44.155 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:44.155 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:44.155 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:44.155 #undef SPDK_CONFIG_SHARED 00:08:44.155 #undef SPDK_CONFIG_SMA 00:08:44.155 #define SPDK_CONFIG_TESTS 1 00:08:44.155 #undef SPDK_CONFIG_TSAN 00:08:44.155 #define SPDK_CONFIG_UBLK 1 00:08:44.155 #define SPDK_CONFIG_UBSAN 1 00:08:44.155 #undef SPDK_CONFIG_UNIT_TESTS 00:08:44.155 #undef SPDK_CONFIG_URING 00:08:44.155 #define SPDK_CONFIG_URING_PATH 00:08:44.155 #undef SPDK_CONFIG_URING_ZNS 00:08:44.155 #undef SPDK_CONFIG_USDT 00:08:44.155 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:44.155 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:44.155 #define SPDK_CONFIG_VFIO_USER 1 00:08:44.155 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:44.155 #define SPDK_CONFIG_VHOST 1 00:08:44.155 #define SPDK_CONFIG_VIRTIO 1 00:08:44.155 #undef SPDK_CONFIG_VTUNE 00:08:44.155 #define SPDK_CONFIG_VTUNE_DIR 00:08:44.155 #define SPDK_CONFIG_WERROR 1 00:08:44.155 #define SPDK_CONFIG_WPDK_DIR 00:08:44.155 #undef SPDK_CONFIG_XNVME 00:08:44.155 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.155 22:59:36 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:08:44.156 22:59:36 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:44.156 
22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:44.156 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 4160822 ]] 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 4160822 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:44.157 
22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:44.157 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.OwDOal 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.OwDOal/tests/vfio /tmp/spdk.OwDOal 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4327477248 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=49264893952 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=12477386752 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342145024 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6311936 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869696512 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1445888 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:44.158 * Looking for test storage... 
00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=49264893952 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=14691979264 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:44.158 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # true 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:44.158 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:44.158 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:44.159 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:44.159 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:44.159 22:59:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:44.159 [2024-06-07 22:59:36.362021] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:44.159 [2024-06-07 22:59:36.362099] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160945 ] 00:08:44.159 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.418 [2024-06-07 22:59:36.484945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.418 [2024-06-07 22:59:36.571415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.678 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.678 INFO: Seed: 1187919837 00:08:44.678 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:44.678 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:44.678 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:44.678 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.678 #2 INITED exec/s: 0 rss: 65Mb 00:08:44.678 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:44.678 This may also happen if the target rejected all inputs we tried so far 00:08:44.678 [2024-06-07 22:59:36.834053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:45.196 NEW_FUNC[1/646]: 0x4828a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:45.196 NEW_FUNC[2/646]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:45.196 #5 NEW cov: 10907 ft: 10374 corp: 2/7b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:45.482 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:45.482 #21 NEW cov: 10943 ft: 14610 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeBit- 00:08:45.764 #22 NEW cov: 10946 ft: 15583 corp: 4/19b lim: 6 exec/s: 22 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:46.023 #23 NEW cov: 10946 ft: 16013 corp: 5/25b lim: 6 exec/s: 23 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:46.282 #24 NEW cov: 10946 ft: 16327 corp: 6/31b lim: 6 exec/s: 24 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:46.541 #25 NEW cov: 10953 ft: 16490 corp: 7/37b lim: 6 exec/s: 25 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:46.800 #26 NEW cov: 10953 ft: 16822 corp: 8/43b lim: 6 exec/s: 13 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:46.800 #26 DONE cov: 10953 ft: 16822 corp: 8/43b lim: 6 exec/s: 13 rss: 73Mb 00:08:46.800 Done 26 runs in 2 second(s) 00:08:46.800 [2024-06-07 22:59:38.910812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf 
/tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:47.060 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:47.060 22:59:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:47.060 [2024-06-07 22:59:39.229802] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:47.060 [2024-06-07 22:59:39.229874] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161397 ] 00:08:47.060 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.320 [2024-06-07 22:59:39.352366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.320 [2024-06-07 22:59:39.438749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.579 INFO: Running with entropic power schedule (0xFF, 100). 
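The two "echo leak:" steps at run.sh@43-44 above list call paths LeakSanitizer should ignore; xtrace does not show redirections, but they presumably land in the suppress_file named by LSAN_OPTIONS at @30/@34. Rebuilt by hand as a sketch:

suppress_file=/var/tmp/suppress_vfio_fuzz
cat > "$suppress_file" <<'EOF'
leak:spdk_nvmf_qpair_disconnect
leak:nvmf_ctrlr_create
EOF
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

Each "leak:<pattern>" line suppresses leak reports whose allocation stack mentions the named function, so leaks attributed to those two SPDK functions do not fail the run.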
00:08:47.579 INFO: Seed: 4050881247 00:08:47.579 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:47.579 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:47.579 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:47.579 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.579 #2 INITED exec/s: 0 rss: 65Mb 00:08:47.579 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.579 This may also happen if the target rejected all inputs we tried so far 00:08:47.579 [2024-06-07 22:59:39.693710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:47.579 [2024-06-07 22:59:39.773780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:47.579 [2024-06-07 22:59:39.773811] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:47.579 [2024-06-07 22:59:39.773836] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:48.097 NEW_FUNC[1/648]: 0x482e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:48.097 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:48.097 #107 NEW cov: 10907 ft: 10449 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 5 InsertByte-CopyPart-CopyPart-InsertByte-CopyPart- 00:08:48.357 [2024-06-07 22:59:40.416215] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:48.357 [2024-06-07 22:59:40.416253] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:48.357 [2024-06-07 22:59:40.416281] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:48.357 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:48.357 #108 NEW cov: 10942 ft: 12977 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:08:48.616 [2024-06-07 22:59:40.648947] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:48.616 [2024-06-07 22:59:40.648978] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:48.616 [2024-06-07 22:59:40.649002] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:48.616 #109 NEW cov: 10942 ft: 14541 corp: 4/13b lim: 4 exec/s: 109 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:48.616 [2024-06-07 22:59:40.881466] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:48.616 [2024-06-07 22:59:40.881495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:48.616 [2024-06-07 22:59:40.881518] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:48.875 #110 NEW cov: 10942 ft: 15023 corp: 5/17b lim: 4 exec/s: 110 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:48.875 [2024-06-07 22:59:41.112866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:48.875 [2024-06-07 22:59:41.112894] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:48.875 [2024-06-07 22:59:41.112917] vfio_user.c: 144:vfio_user_read: 
*ERROR*: Command 1 return failure 00:08:49.134 #111 NEW cov: 10942 ft: 15086 corp: 6/21b lim: 4 exec/s: 111 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:49.134 [2024-06-07 22:59:41.345889] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:49.134 [2024-06-07 22:59:41.345917] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:49.134 [2024-06-07 22:59:41.345940] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:49.392 #113 NEW cov: 10949 ft: 15225 corp: 7/25b lim: 4 exec/s: 113 rss: 74Mb L: 4/4 MS: 2 EraseBytes-InsertByte- 00:08:49.392 [2024-06-07 22:59:41.569552] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:49.392 [2024-06-07 22:59:41.569587] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:49.392 [2024-06-07 22:59:41.569610] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:49.651 #114 NEW cov: 10949 ft: 15596 corp: 8/29b lim: 4 exec/s: 57 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:08:49.651 #114 DONE cov: 10949 ft: 15596 corp: 8/29b lim: 4 exec/s: 57 rss: 74Mb 00:08:49.651 Done 114 runs in 2 second(s) 00:08:49.651 [2024-06-07 22:59:41.730809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:49.910 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:49.911 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:49.911 22:59:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:49.911 [2024-06-07 22:59:42.057473] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:49.911 [2024-06-07 22:59:42.057550] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161815 ] 00:08:49.911 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.911 [2024-06-07 22:59:42.180638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.170 [2024-06-07 22:59:42.267707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.429 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.429 INFO: Seed: 2596913313 00:08:50.429 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:50.429 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:50.429 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:50.429 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.429 #2 INITED exec/s: 0 rss: 65Mb 00:08:50.429 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.429 This may also happen if the target rejected all inputs we tried so far 00:08:50.429 [2024-06-07 22:59:42.531244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:50.430 [2024-06-07 22:59:42.584869] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:50.949 NEW_FUNC[1/647]: 0x483820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:50.949 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.949 #5 NEW cov: 10894 ft: 10769 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 3 ChangeBit-InsertRepeatedBytes-CopyPart- 00:08:50.949 [2024-06-07 22:59:43.221760] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:51.208 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:51.208 #21 NEW cov: 10925 ft: 13655 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:08:51.208 [2024-06-07 22:59:43.465901] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:51.467 #22 NEW cov: 10925 ft: 14412 corp: 4/25b lim: 8 exec/s: 22 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:08:51.467 [2024-06-07 22:59:43.708706] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:51.725 #28 NEW cov: 10925 ft: 14497 corp: 5/33b lim: 8 exec/s: 28 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:51.725 [2024-06-07 22:59:43.953617] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, 
command 5 00:08:51.984 #29 NEW cov: 10925 ft: 14970 corp: 6/41b lim: 8 exec/s: 29 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:08:51.984 [2024-06-07 22:59:44.199283] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:52.242 #30 NEW cov: 10932 ft: 15044 corp: 7/49b lim: 8 exec/s: 30 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:08:52.242 [2024-06-07 22:59:44.434058] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:52.501 #31 NEW cov: 10932 ft: 15171 corp: 8/57b lim: 8 exec/s: 15 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:08:52.501 #31 DONE cov: 10932 ft: 15171 corp: 8/57b lim: 8 exec/s: 15 rss: 74Mb 00:08:52.501 Done 31 runs in 2 second(s) 00:08:52.501 [2024-06-07 22:59:44.598790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:52.760 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:52.760 22:59:44 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:52.760 [2024-06-07 22:59:44.922197] Starting SPDK 
v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:52.760 [2024-06-07 22:59:44.922267] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162312 ] 00:08:52.760 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.018 [2024-06-07 22:59:45.043998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.018 [2024-06-07 22:59:45.132460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.276 INFO: Running with entropic power schedule (0xFF, 100). 00:08:53.276 INFO: Seed: 1155940493 00:08:53.276 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:53.276 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:53.276 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:53.276 INFO: A corpus is not provided, starting from an empty corpus 00:08:53.276 #2 INITED exec/s: 0 rss: 65Mb 00:08:53.276 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:53.276 This may also happen if the target rejected all inputs we tried so far 00:08:53.276 [2024-06-07 22:59:45.389171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:53.276 [2024-06-07 22:59:45.547463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xff00000000000900, 0xff000100000008ff) fd=323 offset=0x63630a0000000000 prot=0x3: Permission denied 00:08:53.276 [2024-06-07 22:59:45.547496] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00000000000900, 0xff000100000008ff) offset=0x63630a0000000000 flags=0x3: Permission denied 00:08:53.276 [2024-06-07 22:59:45.547511] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:53.276 [2024-06-07 22:59:45.547543] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:53.793 NEW_FUNC[1/648]: 0x483f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:53.793 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:53.793 #77 NEW cov: 10909 ft: 10691 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 5 InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:53.793 [2024-06-07 22:59:46.058901] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xff00000000000900, 0xff000100000008ff) fd=325 offset=0x43630a0000000000 prot=0x3: Permission denied 00:08:53.793 [2024-06-07 22:59:46.058945] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00000000000900, 0xff000100000008ff) offset=0x43630a0000000000 flags=0x3: Permission denied 00:08:53.793 [2024-06-07 22:59:46.058961] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:53.793 [2024-06-07 22:59:46.058983] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:54.051 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 
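Each NEW_FUNC record in the run above is libFuzzer observing a function for the first time in this process. A throwaway one-liner (not part of the harness; the log filename is hypothetical) to list the newly reached functions from a saved run, assuming the usual "NEW_FUNC[i/n]: 0xADDR in name source:line" layout:

awk '{for (i = 1; i <= NF; i++) if ($i ~ /^NEW_FUNC/) print $(i + 3)}' vfio_fuzz.log | sort -u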
00:08:54.051 #88 NEW cov: 10940 ft: 13730 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:54.051 [2024-06-07 22:59:46.298450] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xff00000062000900, 0xff000100620008ff) fd=325 offset=0x43630a0000000000 prot=0x3: Permission denied 00:08:54.051 [2024-06-07 22:59:46.298480] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00000062000900, 0xff000100620008ff) offset=0x43630a0000000000 flags=0x3: Permission denied 00:08:54.051 [2024-06-07 22:59:46.298495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:54.051 [2024-06-07 22:59:46.298517] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:54.309 #89 NEW cov: 10940 ft: 14047 corp: 4/97b lim: 32 exec/s: 89 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:08:54.309 [2024-06-07 22:59:46.527003] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xff00000000000900, 0xff000000000108ff) fd=325 offset=0x63630a0009000001 prot=0x3: Permission denied 00:08:54.309 [2024-06-07 22:59:46.527032] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00000000000900, 0xff000000000108ff) offset=0x63630a0009000001 flags=0x3: Permission denied 00:08:54.309 [2024-06-07 22:59:46.527047] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:54.309 [2024-06-07 22:59:46.527069] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:54.568 #95 NEW cov: 10940 ft: 14689 corp: 5/129b lim: 32 exec/s: 95 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:54.568 [2024-06-07 22:59:46.763254] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 648518350636384255 > max 8796093022208 00:08:54.568 [2024-06-07 22:59:46.763284] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00000062000900, 0x8000001620108ff) offset=0x43ffff0000000000 flags=0x3: No space left on device 00:08:54.568 [2024-06-07 22:59:46.763300] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:08:54.568 [2024-06-07 22:59:46.763322] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:54.827 #96 NEW cov: 10940 ft: 14824 corp: 6/161b lim: 32 exec/s: 96 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:08:55.086 #106 NEW cov: 10951 ft: 15921 corp: 7/193b lim: 32 exec/s: 106 rss: 73Mb L: 32/32 MS: 5 ShuffleBytes-InsertRepeatedBytes-CopyPart-ChangeBit-CopyPart- 00:08:55.086 [2024-06-07 22:59:47.236847] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xff00ff0000006200, 0xff010000000061ff) fd=325 offset=0x43630a0000000000 prot=0x3: Permission denied 00:08:55.086 [2024-06-07 22:59:47.236877] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xff00ff0000006200, 0xff010000000061ff) offset=0x43630a0000000000 flags=0x3: Permission denied 00:08:55.086 [2024-06-07 22:59:47.236892] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Permission denied 00:08:55.086 [2024-06-07 22:59:47.236914] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:55.345 #112 NEW cov: 10951 ft: 16152 corp: 8/225b lim: 32 exec/s: 56 rss: 73Mb L: 32/32 
MS: 1 CopyPart- 00:08:55.345 #112 DONE cov: 10951 ft: 16152 corp: 8/225b lim: 32 exec/s: 56 rss: 73Mb 00:08:55.345 Done 112 runs in 2 second(s) 00:08:55.345 [2024-06-07 22:59:47.404803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:55.604 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:55.604 22:59:47 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:55.604 [2024-06-07 22:59:47.725747] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:08:55.604 [2024-06-07 22:59:47.725833] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162849 ] 00:08:55.604 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.604 [2024-06-07 22:59:47.851675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.862 [2024-06-07 22:59:47.941800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.862 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.862 INFO: Seed: 3961943589 00:08:56.119 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:56.119 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:56.119 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:56.119 INFO: A corpus is not provided, starting from an empty corpus 00:08:56.119 #2 INITED exec/s: 0 rss: 65Mb 00:08:56.119 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:56.119 This may also happen if the target rejected all inputs we tried so far 00:08:56.119 [2024-06-07 22:59:48.194581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:56.684 NEW_FUNC[1/647]: 0x484780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:56.684 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:56.684 #191 NEW cov: 10899 ft: 10817 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 4 ChangeBit-ShuffleBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:08:56.684 #197 NEW cov: 10918 ft: 13645 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:56.942 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:56.942 #198 NEW cov: 10935 ft: 14297 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:57.201 #199 NEW cov: 10935 ft: 14548 corp: 5/129b lim: 32 exec/s: 199 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:57.459 #200 NEW cov: 10935 ft: 14655 corp: 6/161b lim: 32 exec/s: 200 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:08:57.459 #201 NEW cov: 10935 ft: 15752 corp: 7/193b lim: 32 exec/s: 201 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:57.718 #202 NEW cov: 10935 ft: 16246 corp: 8/225b lim: 32 exec/s: 202 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:57.977 #203 NEW cov: 10942 ft: 16751 corp: 9/257b lim: 32 exec/s: 101 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:08:57.977 #203 DONE cov: 10942 ft: 16751 corp: 9/257b lim: 32 exec/s: 101 rss: 73Mb 00:08:57.977 Done 203 runs in 2 second(s) 00:08:57.977 [2024-06-07 22:59:50.178812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@23 -- # local timen=1 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:58.236 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:58.236 22:59:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:58.236 [2024-06-07 22:59:50.500110] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:58.236 [2024-06-07 22:59:50.500182] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163385 ] 00:08:58.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.495 [2024-06-07 22:59:50.623005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.495 [2024-06-07 22:59:50.708495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.755 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.755 INFO: Seed: 2436978920 00:08:58.755 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:08:58.755 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:08:58.755 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:58.755 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.755 #2 INITED exec/s: 0 rss: 65Mb 00:08:58.755 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:58.755 This may also happen if the target rejected all inputs we tried so far 00:08:58.755 [2024-06-07 22:59:50.965810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:58.755 [2024-06-07 22:59:51.026641] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:58.755 [2024-06-07 22:59:51.026685] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.579 NEW_FUNC[1/647]: 0x485180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:59.579 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:59.579 #22 NEW cov: 10889 ft: 10875 corp: 2/14b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 5 InsertByte-InsertRepeatedBytes-CrossOver-InsertRepeatedBytes-CopyPart- 00:08:59.579 [2024-06-07 22:59:51.657172] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.579 [2024-06-07 22:59:51.657224] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.579 NEW_FUNC[1/2]: 0xf39e30 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:416 00:08:59.579 NEW_FUNC[2/2]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:59.580 #33 NEW cov: 10944 ft: 13807 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeByte- 00:08:59.838 [2024-06-07 22:59:51.898822] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.839 [2024-06-07 22:59:51.898860] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.839 #34 NEW cov: 10944 ft: 14100 corp: 4/40b lim: 13 exec/s: 34 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:09:00.097 [2024-06-07 22:59:52.129194] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:00.097 [2024-06-07 22:59:52.129232] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:00.097 #35 NEW cov: 10944 ft: 14853 corp: 5/53b lim: 13 exec/s: 35 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:09:00.097 [2024-06-07 22:59:52.358617] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:00.097 [2024-06-07 22:59:52.358654] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:00.356 #41 NEW cov: 10944 ft: 14930 corp: 6/66b lim: 13 exec/s: 41 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:00.356 [2024-06-07 22:59:52.588353] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:00.356 [2024-06-07 22:59:52.588391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:00.615 #57 NEW cov: 10951 ft: 15137 corp: 7/79b lim: 13 exec/s: 57 rss: 73Mb L: 13/13 MS: 1 CrossOver- 00:09:00.615 [2024-06-07 22:59:52.818471] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:00.615 [2024-06-07 22:59:52.818512] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:00.874 #58 NEW cov: 10951 ft: 15237 corp: 8/92b lim: 13 exec/s: 29 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:00.874 #58 DONE cov: 10951 ft: 15237 corp: 8/92b lim: 13 exec/s: 29 rss: 73Mb 00:09:00.874 Done 58 runs in 2 second(s) 
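The "#58 DONE" line above is libFuzzer's end-of-run summary: cov counts coverage points hit, ft counts features, corp is corpus units/bytes, exec/s is executions per second, and rss is memory use; "Done 58 runs in 2 second(s)" is libFuzzer's closing line. A throwaway awk sketch (not part of run.sh; the filename is hypothetical) to tabulate those summaries across a saved log:

awk '/#[0-9]+ +DONE/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "cov:")    cov  = $(i + 1)
        if ($i == "exec/s:") rate = $(i + 1)
    }
    print "cov=" cov, "exec/s=" rate
}' vfio_fuzz.log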
00:09:00.874 [2024-06-07 22:59:52.969804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:01.132 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:01.132 22:59:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:01.132 [2024-06-07 22:59:53.294498] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:09:01.132 [2024-06-07 22:59:53.294618] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163914 ] 00:09:01.132 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.391 [2024-06-07 22:59:53.418059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.391 [2024-06-07 22:59:53.504100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.649 INFO: Running with entropic power schedule (0xFF, 100). 
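With fuzzer 5 torn down above (rm -rf of its /tmp/vfio-user-5 state at run.sh@58), the trace loops back through common.sh@72 into the sixth and final fuzzer. The iteration logic visible across this log, reconstructed as a sketch from the common.sh@69-73 trace lines:

start_llvm_fuzz_short() {
    local fuzz_num=$1   # 7 fuzzer types, counted by grepping '.fn =' in llvm_vfio_fuzz.c
    local time=$2       # 1 second per fuzzer in the short run
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" "$time" 0x1   # fuzzer_type, time budget, core mask
    done
}
start_llvm_fuzz_short 7 1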
00:09:01.649 INFO: Seed: 939023144 00:09:01.649 INFO: Loaded 1 modules (354788 inline 8-bit counters): 354788 [0x29647cc, 0x29bb1b0), 00:09:01.649 INFO: Loaded 1 PC tables (354788 PCs): 354788 [0x29bb1b0,0x2f24ff0), 00:09:01.649 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:01.649 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.649 #2 INITED exec/s: 0 rss: 65Mb 00:09:01.649 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:01.649 This may also happen if the target rejected all inputs we tried so far 00:09:01.649 [2024-06-07 22:59:53.761424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:09:01.649 [2024-06-07 22:59:53.820622] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.649 [2024-06-07 22:59:53.820663] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.166 NEW_FUNC[1/648]: 0x485e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:09:02.166 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:02.166 #8 NEW cov: 10905 ft: 10867 corp: 2/10b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:02.425 [2024-06-07 22:59:54.451859] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.425 [2024-06-07 22:59:54.451908] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.425 NEW_FUNC[1/1]: 0x1a40550 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:09:02.425 #11 NEW cov: 10936 ft: 13692 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 3 CrossOver-InsertByte-CrossOver- 00:09:02.425 [2024-06-07 22:59:54.673293] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.425 [2024-06-07 22:59:54.673333] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.683 #17 NEW cov: 10936 ft: 13879 corp: 4/28b lim: 9 exec/s: 17 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:02.683 [2024-06-07 22:59:54.894175] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.683 [2024-06-07 22:59:54.894214] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.942 #18 NEW cov: 10936 ft: 14015 corp: 5/37b lim: 9 exec/s: 18 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:02.942 [2024-06-07 22:59:55.105070] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.942 [2024-06-07 22:59:55.105108] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.200 #27 NEW cov: 10936 ft: 14337 corp: 6/46b lim: 9 exec/s: 27 rss: 73Mb L: 9/9 MS: 4 EraseBytes-ChangeByte-EraseBytes-CopyPart- 00:09:03.200 [2024-06-07 22:59:55.325702] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.200 [2024-06-07 22:59:55.325741] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.200 #28 NEW cov: 10936 ft: 14449 corp: 7/55b lim: 9 exec/s: 28 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:03.459 [2024-06-07 22:59:55.536029] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.459 [2024-06-07 22:59:55.536066] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.459 #29 NEW cov: 10943 ft: 14650 corp: 8/64b lim: 9 exec/s: 29 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:09:03.718 [2024-06-07 22:59:55.755722] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.718 [2024-06-07 22:59:55.755760] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.718 #30 NEW cov: 10943 ft: 14680 corp: 9/73b lim: 9 exec/s: 15 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:09:03.718 #30 DONE cov: 10943 ft: 14680 corp: 9/73b lim: 9 exec/s: 15 rss: 73Mb 00:09:03.718 Done 30 runs in 2 second(s) 00:09:03.718 [2024-06-07 22:59:55.905809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:03.977 00:09:03.977 real 0m20.147s 00:09:03.977 user 0m27.526s 00:09:03.977 sys 0m2.192s 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:03.977 22:59:56 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:03.977 ************************************ 00:09:03.977 END TEST vfio_fuzz 00:09:03.977 ************************************ 00:09:03.977 22:59:56 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:09:03.977 00:09:03.977 real 1m28.220s 00:09:03.977 user 2m7.056s 00:09:03.977 sys 0m11.755s 00:09:03.977 22:59:56 llvm_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:03.977 22:59:56 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:03.977 ************************************ 00:09:03.977 END TEST llvm_fuzz 00:09:03.977 ************************************ 00:09:04.236 22:59:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:09:04.236 22:59:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:09:04.236 22:59:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:09:04.236 22:59:56 -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:04.236 22:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:04.236 22:59:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:09:04.236 22:59:56 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:09:04.236 22:59:56 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:09:04.236 22:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:10.808 INFO: APP EXITING 00:09:10.808 INFO: killing all VMs 00:09:10.808 INFO: killing vhost app 00:09:10.808 WARN: no vhost pid file found 00:09:10.808 INFO: EXIT DONE 00:09:14.137 Waiting for block devices as requested 00:09:14.137 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:14.137 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:14.137 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:14.395 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:14.395 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:14.395 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:14.653 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:14.653 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:14.911 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 
00:09:14.911 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:09:14.911 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:09:15.169 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:09:15.169 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:09:15.169 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:09:15.428 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:09:15.428 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:09:15.428 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:09:19.620 Cleaning
00:09:19.620 Removing: /dev/shm/spdk_tgt_trace.pid4126409
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4123955
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4125209
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4126409
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4127116
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4128177
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4128425
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4129333
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4129600
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4130010
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4130333
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4130657
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4130998
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4131322
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4131650
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4132022
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4132330
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4133611
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4136808
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4137300
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4137632
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4137782
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4138435
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4138482
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4139087
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4139318
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4139615
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4139881
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4140152
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4140193
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4140823
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4141104
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4141377
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4141466
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4141767
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4142005
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4142104
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4142398
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4142679
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4142964
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4143263
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4143544
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4143831
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4144110
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4144402
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4144681
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4144968
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4145223
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4145476
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4145729
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4145999
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4146253
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4146521
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4146783
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4147057
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4147313
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4147591
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4147900
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4148248
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4148972
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4149384
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4149801
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4150336
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4150867
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4151365
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4151712
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4152231
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4152766
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4153301
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4153699
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4154131
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4154668
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4155198
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4155554
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4156025
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4156555
00:09:19.620 Removing: /var/run/dpdk/spdk_pid4157064
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4157387
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4157918
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4158447
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4158914
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4159273
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4159802
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4160336
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4160945
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4161397
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4161815
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4162312
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4162849
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4163385
00:09:19.879 Removing: /var/run/dpdk/spdk_pid4163914
00:09:19.879 Clean
00:09:19.879 23:00:12 -- common/autotest_common.sh@1450 -- # return 0
00:09:19.879 23:00:12 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:09:19.879 23:00:12 -- common/autotest_common.sh@729 -- # xtrace_disable
00:09:19.879 23:00:12 -- common/autotest_common.sh@10 -- # set +x
00:09:19.879 23:00:12 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:09:19.879 23:00:12 -- common/autotest_common.sh@729 -- # xtrace_disable
00:09:19.879 23:00:12 -- common/autotest_common.sh@10 -- # set +x
00:09:19.879 23:00:12 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:19.879 23:00:12 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:09:19.879 23:00:12 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:09:19.879 23:00:12 -- spdk/autotest.sh@391 -- # hash lcov
00:09:19.879 23:00:12 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:09:20.139 23:00:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:20.139 23:00:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:20.139 23:00:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:20.139 23:00:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:20.139 23:00:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.139 23:00:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.139 23:00:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.139 23:00:12 -- paths/export.sh@5 -- $ export PATH
00:09:20.139 23:00:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:20.139 23:00:12 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:20.139 23:00:12 -- common/autobuild_common.sh@437 -- $ date +%s
00:09:20.139 23:00:12 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717794012.XXXXXX
00:09:20.139 23:00:12 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717794012.dYfZSo
00:09:20.139 23:00:12 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:09:20.139 23:00:12 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:09:20.139 23:00:12 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:20.139 23:00:12 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:20.139 23:00:12 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:20.139 23:00:12 -- common/autobuild_common.sh@453 -- $ get_config_params
00:09:20.139 23:00:12 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:09:20.139 23:00:12 -- common/autotest_common.sh@10 -- $ set +x
00:09:20.139 23:00:12 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:20.139 23:00:12 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:09:20.139 23:00:12 -- pm/common@17 -- $ local monitor
00:09:20.139 23:00:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.139 23:00:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.139 23:00:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.139 23:00:12 -- pm/common@21 -- $ date +%s
00:09:20.139 23:00:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:20.139 23:00:12 -- pm/common@21 -- $ date +%s
00:09:20.139 23:00:12 -- pm/common@25 -- $ sleep 1
00:09:20.139 23:00:12 -- pm/common@21 -- $ date +%s
00:09:20.139 23:00:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717794012
00:09:20.139 23:00:12 -- pm/common@21 -- $ date +%s
00:09:20.139 23:00:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717794012
00:09:20.139 23:00:12 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717794012
00:09:20.139 23:00:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717794012
00:09:20.139 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717794012_collect-vmstat.pm.log
00:09:20.139 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717794012_collect-cpu-load.pm.log
00:09:20.139 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717794012_collect-cpu-temp.pm.log
00:09:20.139 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717794012_collect-bmc-pm.bmc.pm.log
00:09:21.077 23:00:13 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:09:21.077 23:00:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:09:21.077 23:00:13 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.077 23:00:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:21.077 23:00:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:21.077 23:00:13 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:21.077 23:00:13 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:21.077 23:00:13 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:21.077 23:00:13 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:21.077 23:00:13 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:21.077 23:00:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:09:21.077 23:00:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:09:21.077 23:00:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:09:21.077 23:00:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:21.077 23:00:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:09:21.077 23:00:13 -- pm/common@44 -- $ pid=4172114
00:09:21.077 23:00:13 -- pm/common@50 -- $ kill -TERM 4172114
00:09:21.077 23:00:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:21.077 23:00:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:09:21.077 23:00:13 -- pm/common@44 -- $ pid=4172116
00:09:21.077 23:00:13 -- pm/common@50 -- $ kill -TERM 4172116
00:09:21.077 23:00:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:21.077 23:00:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:09:21.077 23:00:13 -- pm/common@44 -- $ pid=4172119
00:09:21.077 23:00:13 -- pm/common@50 -- $ kill -TERM 4172119
00:09:21.077 23:00:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:21.077 23:00:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:09:21.077 23:00:13 -- pm/common@44 -- $ pid=4172154
00:09:21.077 23:00:13 -- pm/common@50 -- $ sudo -E kill -TERM 4172154
00:09:21.336 + [[ -n 4008585 ]]
00:09:21.336 + sudo kill 4008585
00:09:21.346 [Pipeline] }
00:09:21.363 [Pipeline] // stage
00:09:21.367 [Pipeline] }
00:09:21.382 [Pipeline] // timeout
00:09:21.387 [Pipeline] }
00:09:21.400 [Pipeline] // catchError
00:09:21.405 [Pipeline] }
00:09:21.420 [Pipeline] // wrap
00:09:21.427 [Pipeline] }
00:09:21.446 [Pipeline] // catchError
00:09:21.454 [Pipeline] stage
00:09:21.455 [Pipeline] { (Epilogue)
00:09:21.466 [Pipeline] catchError
00:09:21.467 [Pipeline] {
00:09:21.477 [Pipeline] echo
00:09:21.478 Cleanup processes
00:09:21.482 [Pipeline] sh
00:09:21.764 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.764 4071975 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638
00:09:21.764 4071995 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793638
00:09:21.764 4172279 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:09:21.764 4173124 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:21.782 [Pipeline] sh
00:09:22.099 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:22.099 ++ grep -v 'sudo pgrep'
00:09:22.099 ++ awk '{print $1}'
00:09:22.099 + sudo kill -9 4071975 4071995 4172279
00:09:22.112 [Pipeline] sh
00:09:22.397 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:22.397 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:09:22.397 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:09:24.311 [Pipeline] sh
00:09:24.595 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:24.595 Artifacts sizes are good
00:09:24.612 [Pipeline] archiveArtifacts
00:09:24.620 Archiving artifacts
00:09:24.696 [Pipeline] sh
00:09:24.988 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:25.003 [Pipeline] cleanWs
00:09:25.014 [WS-CLEANUP] Deleting project workspace...
00:09:25.014 [WS-CLEANUP] Deferred wipeout is used...
00:09:25.022 [WS-CLEANUP] done
00:09:25.024 [Pipeline] }
00:09:25.043 [Pipeline] // catchError
00:09:25.056 [Pipeline] sh
00:09:25.375 + logger -p user.info -t JENKINS-CI
00:09:25.384 [Pipeline] }
00:09:25.400 [Pipeline] // stage
00:09:25.406 [Pipeline] }
00:09:25.425 [Pipeline] // node
00:09:25.432 [Pipeline] End of Pipeline
00:09:25.472 Finished: SUCCESS