00:00:00.000 Started by upstream project "autotest-per-patch" build number 130929
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.016 The recommended git tool is: git
00:00:00.017 using credential 00000000-0000-0000-0000-000000000002
00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.036 Fetching changes from the remote Git repository
00:00:00.037 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.065 Using shallow fetch with depth 1
00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.065 > git --version # timeout=10
00:00:00.108 > git --version # 'git version 2.39.2'
00:00:00.108 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.155 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.156 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.926 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.937 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.947 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:02.948 > git config core.sparsecheckout # timeout=10
00:00:02.957 > git read-tree -mu HEAD # timeout=10
00:00:02.972 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:02.987 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:02.987 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:03.093 [Pipeline] Start of Pipeline
00:00:03.105 [Pipeline] library
00:00:03.106 Loading library shm_lib@master
00:00:03.107 Library shm_lib@master is cached. Copying from home.
00:00:03.124 [Pipeline] node
00:00:03.139 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.140 [Pipeline] {
00:00:03.150 [Pipeline] catchError
00:00:03.151 [Pipeline] {
00:00:03.161 [Pipeline] wrap
00:00:03.167 [Pipeline] {
00:00:03.172 [Pipeline] stage
00:00:03.174 [Pipeline] { (Prologue)
00:00:03.351 [Pipeline] sh
00:00:03.635 + logger -p user.info -t JENKINS-CI
00:00:03.705 [Pipeline] echo
00:00:03.711 Node: WFP39
00:00:03.720 [Pipeline] sh
00:00:04.015 [Pipeline] setCustomBuildProperty
00:00:04.026 [Pipeline] echo
00:00:04.027 Cleanup processes
00:00:04.031 [Pipeline] sh
00:00:04.317 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.317 3762008 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.330 [Pipeline] sh
00:00:04.616 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.616 ++ grep -v 'sudo pgrep'
00:00:04.616 ++ awk '{print $1}'
00:00:04.616 + sudo kill -9
00:00:04.616 + true
00:00:04.629 [Pipeline] cleanWs
00:00:04.640 [WS-CLEANUP] Deleting project workspace...
00:00:04.640 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.646 [WS-CLEANUP] done
00:00:04.651 [Pipeline] setCustomBuildProperty
00:00:04.664 [Pipeline] sh
00:00:04.948 + sudo git config --global --replace-all safe.directory '*'
00:00:05.081 [Pipeline] httpRequest
00:00:05.870 [Pipeline] echo
00:00:05.871 Sorcerer 10.211.164.101 is alive
00:00:05.877 [Pipeline] retry
00:00:05.878 [Pipeline] {
00:00:05.891 [Pipeline] httpRequest
00:00:05.895 HttpMethod: GET
00:00:05.895 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:05.896 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:05.899 Response Code: HTTP/1.1 200 OK
00:00:05.899 Success: Status code 200 is in the accepted range: 200,404
00:00:05.899 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:06.456 [Pipeline] }
00:00:06.471 [Pipeline] // retry
00:00:06.477 [Pipeline] sh
00:00:06.761 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:06.776 [Pipeline] httpRequest
00:00:07.349 [Pipeline] echo
00:00:07.350 Sorcerer 10.211.164.101 is alive
00:00:07.358 [Pipeline] retry
00:00:07.359 [Pipeline] {
00:00:07.371 [Pipeline] httpRequest
00:00:07.376 HttpMethod: GET
00:00:07.376 URL: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz
00:00:07.377 Sending request to url: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz
00:00:07.389 Response Code: HTTP/1.1 200 OK
00:00:07.390 Success: Status code 200 is in the accepted range: 200,404
00:00:07.390 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz
00:00:59.073 [Pipeline] }
00:00:59.090 [Pipeline] // retry
00:00:59.098 [Pipeline] sh
00:00:59.386 + tar --no-same-owner -xf spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz
00:01:01.937 [Pipeline] sh
00:01:02.224 + git -C spdk log --oneline -n5
00:01:02.224 6101e4048 vhost: defer the g_fini_cb after called
00:01:02.224 92108e0a2 fsdev/aio: add support for null IOs
00:01:02.224 dcdab59d3 lib/reduce: Check return code of read superblock
00:01:02.224 95d9d27f7 bdev/nvme: controller failover/multipath doc change
00:01:02.224 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create()
00:01:02.235 [Pipeline] }
00:01:02.250 [Pipeline] // stage
00:01:02.259 [Pipeline] stage
00:01:02.261 [Pipeline] { (Prepare)
00:01:02.277 [Pipeline] writeFile
00:01:02.293 [Pipeline] sh
00:01:02.631 + logger -p user.info -t JENKINS-CI
00:01:02.653 [Pipeline] sh
00:01:02.939 + logger -p user.info -t JENKINS-CI
00:01:02.952 [Pipeline] sh
00:01:03.244 + cat autorun-spdk.conf
00:01:03.244 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.244 SPDK_TEST_FUZZER_SHORT=1
00:01:03.244 SPDK_TEST_FUZZER=1
00:01:03.244 SPDK_TEST_SETUP=1
00:01:03.244 SPDK_RUN_UBSAN=1
00:01:03.257 RUN_NIGHTLY=0
00:01:03.289 [Pipeline] readFile
00:01:03.311 [Pipeline] withEnv
00:01:03.313 [Pipeline] {
00:01:03.320 [Pipeline] sh
00:01:03.601 + set -ex
00:01:03.601 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:03.601 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:03.601 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.601 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:03.601 ++ SPDK_TEST_FUZZER=1
00:01:03.601 ++ SPDK_TEST_SETUP=1
00:01:03.601 ++ SPDK_RUN_UBSAN=1
00:01:03.601 ++ RUN_NIGHTLY=0
00:01:03.601 + case $SPDK_TEST_NVMF_NICS in
00:01:03.601 + DRIVERS=
00:01:03.601 + [[ -n '' ]]
00:01:03.601 + exit 0
00:01:03.611 [Pipeline] }
00:01:03.625 [Pipeline] // withEnv
00:01:03.631 [Pipeline] }
00:01:03.646 [Pipeline] // stage
00:01:03.655 [Pipeline] catchError
00:01:03.657 [Pipeline] {
00:01:03.671 [Pipeline] timeout
00:01:03.671 Timeout set to expire in 30 min
00:01:03.673 [Pipeline] {
00:01:03.687 [Pipeline] stage
00:01:03.689 [Pipeline] { (Tests)
00:01:03.703 [Pipeline] sh
00:01:04.000 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:04.000 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:04.000 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:04.000 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:04.000 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:04.000 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:04.000 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:04.000 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:04.000 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:04.000 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:04.000 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:04.000 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:04.000 + source /etc/os-release
00:01:04.000 ++ NAME='Fedora Linux'
00:01:04.001 ++ VERSION='39 (Cloud Edition)'
00:01:04.001 ++ ID=fedora
00:01:04.001 ++ VERSION_ID=39
00:01:04.001 ++ VERSION_CODENAME=
00:01:04.001 ++ PLATFORM_ID=platform:f39
00:01:04.001 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:04.001 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.001 ++ LOGO=fedora-logo-icon
00:01:04.001 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:04.001 ++ HOME_URL=https://fedoraproject.org/
00:01:04.001 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:04.001 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.001 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.001 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.001 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:04.001 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.001 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:04.001 ++ SUPPORT_END=2024-11-12
00:01:04.001 ++ VARIANT='Cloud Edition'
00:01:04.001 ++ VARIANT_ID=cloud
00:01:04.001 + uname -a
00:01:04.001 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:04.001 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:06.549 Hugepages
00:01:06.549 node hugesize free / total
00:01:06.549 node0 1048576kB 0 / 0
00:01:06.549 node0 2048kB 0 / 0
00:01:06.549 node1 1048576kB 0 / 0
00:01:06.549 node1 2048kB 0 / 0
00:01:06.549
00:01:06.549 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:06.549 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:06.549 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:06.549 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:06.549 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:06.549 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:06.549 + rm -f /tmp/spdk-ld-path
00:01:06.549 + source autorun-spdk.conf
00:01:06.549 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.549 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:06.549 ++ SPDK_TEST_FUZZER=1
00:01:06.549 ++ SPDK_TEST_SETUP=1
00:01:06.549 ++ SPDK_RUN_UBSAN=1
00:01:06.549 ++ RUN_NIGHTLY=0
00:01:06.549 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:06.549 + [[ -n '' ]]
00:01:06.549 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:06.549 + for M in /var/spdk/build-*-manifest.txt
00:01:06.549 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:06.549 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.549 + for M in /var/spdk/build-*-manifest.txt
00:01:06.549 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:06.549 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.549 + for M in /var/spdk/build-*-manifest.txt
00:01:06.549 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:06.549 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.549 ++ uname
00:01:06.549 + [[ Linux == \L\i\n\u\x ]]
00:01:06.549 + sudo dmesg -T
00:01:06.808 + sudo dmesg --clear
00:01:06.808 + dmesg_pid=3762952
00:01:06.809 + [[ Fedora Linux == FreeBSD ]]
00:01:06.809 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.809 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.809 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:06.809 + [[ -x /usr/src/fio-static/fio ]]
00:01:06.809 + sudo dmesg -Tw
00:01:06.809 + export FIO_BIN=/usr/src/fio-static/fio
00:01:06.809 + FIO_BIN=/usr/src/fio-static/fio
00:01:06.809 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:06.809 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:06.809 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:06.809 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.809 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.809 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:06.809 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.809 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.809 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:06.809 Test configuration:
00:01:06.809 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.809 SPDK_TEST_FUZZER_SHORT=1
00:01:06.809 SPDK_TEST_FUZZER=1
00:01:06.809 SPDK_TEST_SETUP=1
00:01:06.809 SPDK_RUN_UBSAN=1
00:01:06.809 RUN_NIGHTLY=0
00:08:37 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:08:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:08:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:08:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37 -- paths/export.sh@5 -- $ export PATH
00:08:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:08:37 -- common/autobuild_common.sh@486 -- $ date +%s
00:08:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728425317.XXXXXX
00:08:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728425317.BnkZLv
00:08:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:08:37 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:08:37 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:08:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:08:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:08:37 -- common/autobuild_common.sh@502 -- $ get_config_params
00:08:37 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:08:37 -- common/autotest_common.sh@10 -- $ set +x
00:08:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:08:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:08:37 -- pm/common@17 -- $ local monitor
00:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:37 -- pm/common@21 -- $ date +%s
00:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:37 -- pm/common@21 -- $ date +%s
00:08:37 -- pm/common@25 -- $ sleep 1
00:08:37 -- pm/common@21 -- $ date +%s
00:08:37 -- pm/common@21 -- $ date +%s
00:08:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425317
00:08:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425317
00:08:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425317
00:08:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425317
00:01:07.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425317_collect-vmstat.pm.log
00:01:07.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425317_collect-cpu-load.pm.log
00:01:07.068 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425317_collect-cpu-temp.pm.log
00:01:07.069 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425317_collect-bmc-pm.bmc.pm.log
00:01:08.007 00:08:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:08:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:08:38 -- spdk/autobuild.sh@12 -- $ umask 022
00:08:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:38 -- spdk/autobuild.sh@16 -- $ date -u
00:01:08.008 Tue Oct 8 10:08:38 PM UTC 2024
00:08:38 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:08.008 v25.01-pre-42-g6101e4048
00:08:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:08:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:08:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:08:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:08:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:08:38 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.008 ************************************
00:01:08.008 START TEST ubsan
00:01:08.008 ************************************
00:08:38 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:08.008 using ubsan
00:01:08.008
00:01:08.008 real 0m0.001s
00:01:08.008 user 0m0.001s
00:01:08.008 sys 0m0.000s
00:08:38 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:08:38 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:08.008 ************************************
00:01:08.008 END TEST ubsan
00:01:08.008 ************************************
00:08:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:08:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:08:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:08:38 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:08:38 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:08:38 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:08:38 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:08:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:08:38 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.008 ************************************
00:01:08.008 START TEST autobuild_llvm_precompile
00:01:08.008 ************************************
00:08:38 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:01:08.008 Target: x86_64-redhat-linux-gnu
00:01:08.008 Thread model: posix
00:01:08.008 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
00:08:38 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:08.267 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:08.267 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:08.837 Using 'verbs' RDMA provider
00:01:24.665 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:36.881 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:36.881 Creating mk/config.mk...done.
00:01:36.881 Creating mk/cc.flags.mk...done.
00:01:36.881 Type 'make' to build.
00:01:36.881
00:01:36.881 real 0m28.660s
00:01:36.881 user 0m12.798s
00:01:36.881 sys 0m15.210s
00:09:07 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:09:07 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:36.881 ************************************
00:01:36.881 END TEST autobuild_llvm_precompile
00:01:36.881 ************************************
00:09:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:09:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:09:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:09:07 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:09:07 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:36.882 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:36.882 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:37.461 Using 'verbs' RDMA provider
00:01:50.609 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:02.826 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:02.826 Creating mk/config.mk...done.
00:02:02.826 Creating mk/cc.flags.mk...done.
00:02:02.826 Type 'make' to build.
00:02:02.826 00:09:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
00:09:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:09:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:09:31 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.826 ************************************
00:02:02.826 START TEST make
00:02:02.826 ************************************
00:09:32 make -- common/autotest_common.sh@1125 -- $ make -j72
00:02:02.826 make[1]: Nothing to be done for 'all'.
00:02:03.394 The Meson build system
00:02:03.394 Version: 1.5.0
00:02:03.394 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:03.394 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:03.395 Build type: native build
00:02:03.395 Project name: libvfio-user
00:02:03.395 Project version: 0.0.1
00:02:03.395 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:03.395 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:03.395 Host machine cpu family: x86_64
00:02:03.395 Host machine cpu: x86_64
00:02:03.395 Run-time dependency threads found: YES
00:02:03.395 Library dl found: YES
00:02:03.395 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:03.395 Run-time dependency json-c found: YES 0.17
00:02:03.395 Run-time dependency cmocka found: YES 1.1.7
00:02:03.395 Program pytest-3 found: NO
00:02:03.395 Program flake8 found: NO
00:02:03.395 Program misspell-fixer found: NO
00:02:03.395 Program restructuredtext-lint found: NO
00:02:03.395 Program valgrind found: YES (/usr/bin/valgrind)
00:02:03.395 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:03.395 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:03.395 Compiler for C supports arguments -Wwrite-strings: YES
00:02:03.395 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:03.395 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:03.395 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:03.395 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:03.395 Build targets in project: 8
00:02:03.395 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:03.395 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:03.395
00:02:03.395 libvfio-user 0.0.1
00:02:03.395
00:02:03.395 User defined options
00:02:03.395 buildtype : debug
00:02:03.395 default_library: static
00:02:03.395 libdir : /usr/local/lib
00:02:03.395
00:02:03.395 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:03.961 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:03.961 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:02:03.961 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:02:03.961 [3/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:03.961 [4/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:03.961 [5/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:03.961 [6/36] Compiling C object samples/null.p/null.c.o
00:02:03.961 [7/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:03.961 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:02:03.961 [9/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:02:03.961 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:03.961 [11/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:02:03.961 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:02:03.961 [13/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:03.961 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:03.961 [15/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:03.961 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:02:03.961 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:02:03.961 [18/36] Compiling C object test/unit_tests.p/mocks.c.o
00:02:03.961 [19/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:03.961 [20/36] Compiling C object samples/server.p/server.c.o
00:02:03.961 [21/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:03.961 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:03.961 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:03.961 [24/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:03.961 [25/36] Compiling C object samples/client.p/client.c.o
00:02:03.961 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:03.961 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:03.961 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:02:03.961 [29/36] Linking static target lib/libvfio-user.a
00:02:03.961 [30/36] Linking target samples/client
00:02:04.220 [31/36] Linking target test/unit_tests
00:02:04.220 [32/36] Linking target samples/server
00:02:04.220 [33/36] Linking target samples/lspci
00:02:04.220 [34/36] Linking target samples/null
00:02:04.220 [35/36] Linking target samples/gpio-pci-idio-16
00:02:04.220 [36/36] Linking target samples/shadow_ioeventfd_server
00:02:04.220 INFO: autodetecting backend as ninja
00:02:04.220 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.220 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.478 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:04.478 ninja: no work to do.
00:02:11.043 The Meson build system
00:02:11.043 Version: 1.5.0
00:02:11.043 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:11.043 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:11.043 Build type: native build
00:02:11.043 Program cat found: YES (/usr/bin/cat)
00:02:11.043 Project name: DPDK
00:02:11.043 Project version: 24.03.0
00:02:11.043 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:11.043 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:11.043 Host machine cpu family: x86_64
00:02:11.043 Host machine cpu: x86_64
00:02:11.043 Message: ## Building in Developer Mode ##
00:02:11.043 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:11.043 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:11.043 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:11.043 Program python3 found: YES (/usr/bin/python3)
00:02:11.043 Program cat found: YES (/usr/bin/cat)
00:02:11.043 Compiler for C supports arguments -march=native: YES
00:02:11.043 Checking for size of "void *" : 8
00:02:11.043 Checking for size of "void *" : 8 (cached)
00:02:11.043 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:11.043 Library m found: YES
00:02:11.043 Library numa found: YES
00:02:11.043 Has header "numaif.h" : YES
00:02:11.043 Library fdt found: NO
00:02:11.043 Library execinfo found: NO
00:02:11.043 Has header "execinfo.h" : YES
00:02:11.044 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:11.044 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:11.044 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:11.044 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:11.044 Run-time dependency openssl found: YES 3.1.1
00:02:11.044 Run-time dependency libpcap found: YES 1.10.4
00:02:11.044 Has header "pcap.h" with dependency libpcap: YES
00:02:11.044 Compiler for C supports arguments -Wcast-qual: YES
00:02:11.044 Compiler for C supports arguments -Wdeprecated: YES
00:02:11.044 Compiler for C supports arguments -Wformat: YES
00:02:11.044 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:11.044 Compiler for C supports arguments -Wformat-security: YES
00:02:11.044 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:11.044 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:11.044 Compiler for C supports arguments -Wnested-externs: YES
00:02:11.044 Compiler for C supports arguments -Wold-style-definition: YES
00:02:11.044 Compiler for C supports arguments -Wpointer-arith: YES
00:02:11.044 Compiler for C supports arguments -Wsign-compare: YES
00:02:11.044 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:11.044 Compiler for C supports arguments -Wundef: YES
00:02:11.044 Compiler for C supports arguments -Wwrite-strings: YES
00:02:11.044 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:11.044 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:11.044 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:11.044 Program objdump found: YES (/usr/bin/objdump)
00:02:11.044 Compiler for C supports arguments -mavx512f: YES
00:02:11.044 Checking if "AVX512 checking" compiles: YES
00:02:11.044 Fetching value of define "__SSE4_2__" : 1
00:02:11.044 Fetching value of define "__AES__" : 1
00:02:11.044 Fetching value of define "__AVX__" : 1
00:02:11.044 Fetching value of define "__AVX2__" : 1
00:02:11.044 Fetching value of define "__AVX512BW__" : 1
00:02:11.044 Fetching value of define "__AVX512CD__" : 1
00:02:11.044 Fetching value of define "__AVX512DQ__" : 1
00:02:11.044 Fetching value of define "__AVX512F__" : 1
00:02:11.044 Fetching value of define "__AVX512VL__" : 1
00:02:11.044 Fetching value of define "__PCLMUL__" : 1
00:02:11.044 Fetching value of define "__RDRND__" : 1
00:02:11.044 Fetching value of define "__RDSEED__" : 1
00:02:11.044 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:11.044 Fetching value of define "__znver1__" : (undefined)
00:02:11.044 Fetching value of define "__znver2__" : (undefined)
00:02:11.044 Fetching value of define "__znver3__" : (undefined)
00:02:11.044 Fetching value of define "__znver4__" : (undefined)
00:02:11.044 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:11.044 Message: lib/log: Defining dependency "log"
00:02:11.044 Message: lib/kvargs: Defining dependency "kvargs"
00:02:11.044 Message: lib/telemetry: Defining dependency "telemetry"
00:02:11.044 Checking for function "getentropy" : NO
00:02:11.044 Message: lib/eal: Defining dependency "eal"
00:02:11.044 Message: lib/ring: Defining dependency "ring"
00:02:11.044 Message: lib/rcu: Defining dependency "rcu"
00:02:11.044 Message: lib/mempool: Defining dependency "mempool"
00:02:11.044 Message: lib/mbuf: Defining dependency "mbuf"
00:02:11.044 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:11.044 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:11.044 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:11.044 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:11.044 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:11.044 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:11.044 Compiler for C supports arguments -mpclmul: YES
00:02:11.044 Compiler for C supports arguments -maes: YES
00:02:11.044 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:11.044 Compiler for C supports arguments -mavx512bw: YES
00:02:11.044 Compiler for C supports arguments -mavx512dq: YES
00:02:11.044 Compiler for C supports arguments -mavx512vl: YES
00:02:11.044 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:11.044 Compiler for C supports arguments -mavx2: YES
00:02:11.044 Compiler for C supports arguments -mavx: YES
00:02:11.044 Message: lib/net: Defining dependency "net"
00:02:11.044 Message: lib/meter: Defining dependency "meter"
00:02:11.044 Message: lib/ethdev: Defining dependency "ethdev"
00:02:11.044 Message: lib/pci: Defining dependency "pci"
00:02:11.044 Message: lib/cmdline: Defining dependency "cmdline"
00:02:11.044 Message: lib/hash: Defining dependency "hash"
00:02:11.044 Message: lib/timer: Defining dependency "timer"
00:02:11.044 Message: lib/compressdev: Defining dependency "compressdev"
00:02:11.044 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:11.044 Message: lib/dmadev: Defining dependency "dmadev"
00:02:11.044 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:11.044 Message: lib/power: Defining dependency "power"
00:02:11.044 Message: lib/reorder: Defining dependency "reorder"
00:02:11.044 Message: lib/security: Defining dependency "security"
00:02:11.044 Has header "linux/userfaultfd.h" : YES
00:02:11.044 Has header "linux/vduse.h" : YES
00:02:11.044 Message: lib/vhost: Defining dependency "vhost"
00:02:11.044 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:11.044 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:11.044 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:11.044 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:11.044 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:11.044 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:11.044 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:11.044 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:11.044 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:11.044 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:11.044 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:11.044 Configuring doxy-api-html.conf using configuration
00:02:11.044 Configuring doxy-api-man.conf using configuration
00:02:11.044 Program mandb found: YES (/usr/bin/mandb)
00:02:11.044 Program sphinx-build found: NO
00:02:11.044 Configuring rte_build_config.h using configuration
00:02:11.044 Message:
00:02:11.044 =================
00:02:11.044 Applications Enabled
00:02:11.044 =================
00:02:11.044
00:02:11.044 apps:
00:02:11.044
00:02:11.044
00:02:11.044 Message:
00:02:11.044 =================
00:02:11.044 Libraries Enabled
00:02:11.044 =================
00:02:11.044
00:02:11.044 libs:
00:02:11.044 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:11.044 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:11.044 cryptodev, dmadev, power, reorder, security, vhost,
00:02:11.044
00:02:11.044 Message:
00:02:11.044 ===============
00:02:11.044 Drivers Enabled
00:02:11.044 ===============
00:02:11.044
00:02:11.044 common:
00:02:11.044
00:02:11.044 bus:
00:02:11.044 pci, vdev,
00:02:11.044 mempool:
00:02:11.044 ring,
00:02:11.044 dma:
00:02:11.044
00:02:11.044 net:
00:02:11.044
00:02:11.044 crypto:
00:02:11.044
00:02:11.044 compress:
00:02:11.044
00:02:11.044 vdpa:
00:02:11.044
00:02:11.044
00:02:11.044 Message:
00:02:11.044 =================
00:02:11.044 Content Skipped
00:02:11.044 =================
00:02:11.044
00:02:11.044 apps:
00:02:11.044 dumpcap: explicitly disabled via build config
00:02:11.044 graph: explicitly disabled via build config
00:02:11.044 pdump: explicitly disabled via build config
00:02:11.044 proc-info: explicitly disabled via build config
00:02:11.044 test-acl: explicitly disabled via build config
00:02:11.044 test-bbdev: explicitly disabled via build config
00:02:11.044 test-cmdline: explicitly disabled via build config
00:02:11.044 test-compress-perf: explicitly disabled via build config
00:02:11.044 test-crypto-perf: explicitly disabled via build config
00:02:11.044 test-dma-perf: explicitly disabled via build config
00:02:11.044 test-eventdev: explicitly disabled via build config
00:02:11.044 test-fib: explicitly disabled via build config
00:02:11.044 test-flow-perf: explicitly disabled via build config
00:02:11.044 test-gpudev: explicitly disabled via build config
00:02:11.044 test-mldev: explicitly disabled via build config
00:02:11.044 test-pipeline: explicitly disabled via build config
00:02:11.044 test-pmd: explicitly disabled via build config
00:02:11.044 test-regex: explicitly disabled via build config
00:02:11.044 test-sad: explicitly disabled via build config
00:02:11.044 test-security-perf: explicitly disabled via build config
00:02:11.044
00:02:11.044 libs:
00:02:11.044 argparse: explicitly disabled via build config
00:02:11.044 metrics: explicitly disabled via build config
00:02:11.044 acl: explicitly disabled via build config
00:02:11.044 bbdev: explicitly disabled via build config
00:02:11.044 bitratestats: explicitly disabled via build config
00:02:11.044 bpf: explicitly disabled via build config
00:02:11.044 cfgfile: explicitly disabled via build config
00:02:11.044 distributor: explicitly disabled via build config
00:02:11.044 efd: explicitly disabled via build config
00:02:11.044 eventdev: explicitly disabled via build config
00:02:11.044 dispatcher: explicitly disabled via build config
00:02:11.044 gpudev: explicitly disabled via build config
00:02:11.044 gro: explicitly disabled via build config
00:02:11.044 gso: explicitly disabled via build config
00:02:11.044 ip_frag: explicitly disabled via build config
00:02:11.044 jobstats: explicitly disabled via build config
00:02:11.044 latencystats: explicitly disabled via build config
00:02:11.044 lpm: explicitly disabled via build config
00:02:11.044 member: explicitly disabled via build config
00:02:11.044 pcapng: explicitly disabled via build config
00:02:11.044 rawdev: explicitly disabled via build config
00:02:11.044 regexdev: explicitly disabled via build config
00:02:11.044 mldev: explicitly disabled via build config
00:02:11.044 rib: explicitly disabled via build config
00:02:11.044 sched: explicitly disabled via build config
00:02:11.044 stack: explicitly disabled via build config
00:02:11.044 ipsec: explicitly disabled via build config
00:02:11.044 pdcp: explicitly disabled via build config
00:02:11.044 fib: explicitly disabled via build config
00:02:11.044 port: explicitly disabled via build config
00:02:11.044 pdump: explicitly disabled via build config
00:02:11.045 table: explicitly disabled via build config
00:02:11.045 pipeline: explicitly disabled via build config
00:02:11.045 graph: explicitly disabled via build config
00:02:11.045 node: explicitly disabled via build config
00:02:11.045
00:02:11.045 drivers:
00:02:11.045 common/cpt: not in enabled drivers build config
00:02:11.045 common/dpaax: not in enabled drivers build config
00:02:11.045 common/iavf: not in enabled drivers build config
00:02:11.045 common/idpf: not in enabled drivers build config
00:02:11.045 common/ionic: not in enabled drivers build config
00:02:11.045 common/mvep: not in enabled drivers build config
00:02:11.045 common/octeontx: not in enabled drivers build config
00:02:11.045 bus/auxiliary: not in enabled drivers build config
00:02:11.045 bus/cdx: not in enabled drivers build config
00:02:11.045 bus/dpaa: not in enabled drivers build config
00:02:11.045 bus/fslmc: not in enabled drivers build config
00:02:11.045 bus/ifpga: not in enabled drivers build config
00:02:11.045 bus/platform: not in enabled drivers build config
00:02:11.045 bus/uacce: not in enabled drivers build config
00:02:11.045 bus/vmbus: not in enabled drivers build config
00:02:11.045 common/cnxk: not in enabled drivers build config
00:02:11.045 common/mlx5: not in enabled drivers build config
00:02:11.045 common/nfp: not in enabled drivers build config
00:02:11.045 common/nitrox: not in enabled drivers build config
00:02:11.045 common/qat: not in enabled drivers build config
00:02:11.045 common/sfc_efx: not in enabled drivers build config
00:02:11.045 mempool/bucket: not in enabled drivers build config
00:02:11.045 mempool/cnxk: not in enabled drivers build config
00:02:11.045 mempool/dpaa: not in enabled drivers build config
00:02:11.045 mempool/dpaa2: not in enabled drivers build config
00:02:11.045 mempool/octeontx: not in enabled drivers build config
00:02:11.045 mempool/stack: not in enabled drivers build config
00:02:11.045 dma/cnxk: not in enabled drivers build config
00:02:11.045 dma/dpaa: not in enabled drivers build config
00:02:11.045 dma/dpaa2: not in enabled drivers build config
00:02:11.045 dma/hisilicon: not in enabled drivers build config
00:02:11.045 dma/idxd: not in enabled drivers build config
00:02:11.045 dma/ioat: not in enabled drivers build config
00:02:11.045 dma/skeleton: not in enabled drivers build config
00:02:11.045 net/af_packet: not in enabled drivers build config
00:02:11.045 net/af_xdp: not in enabled drivers build config
00:02:11.045 net/ark: not in enabled drivers build config
00:02:11.045 net/atlantic: not in enabled drivers build config
00:02:11.045 net/avp: not in enabled drivers build config
00:02:11.045 net/axgbe: not in enabled drivers build config
00:02:11.045 net/bnx2x: not in enabled drivers build config
00:02:11.045 net/bnxt: not in enabled drivers build config
00:02:11.045 net/bonding: not in enabled drivers build config
00:02:11.045 net/cnxk: not in enabled drivers build config
00:02:11.045 net/cpfl: not in enabled drivers build config
00:02:11.045 net/cxgbe: not in enabled drivers build config
00:02:11.045 net/dpaa: not in enabled drivers build config
00:02:11.045 net/dpaa2: not in enabled drivers build config
00:02:11.045 net/e1000: not in enabled drivers build config
00:02:11.045 net/ena: not in enabled drivers build config
00:02:11.045 net/enetc: not in enabled drivers build config
00:02:11.045 net/enetfec: not in enabled drivers build config
00:02:11.045 net/enic: not in enabled drivers build config
00:02:11.045 net/failsafe: not in enabled drivers build config
00:02:11.045 net/fm10k: not in enabled drivers build config
00:02:11.045 net/gve: not in enabled drivers build config
00:02:11.045 net/hinic: not in enabled drivers build config
00:02:11.045 net/hns3: not in enabled drivers build config
00:02:11.045 net/i40e: not in enabled drivers build config
00:02:11.045 net/iavf: not in enabled drivers build config
00:02:11.045 net/ice: not in enabled drivers build config
00:02:11.045 net/idpf: not in enabled drivers build config
00:02:11.045 net/igc: not in enabled drivers build config
00:02:11.045 net/ionic: not in enabled drivers build config
00:02:11.045 net/ipn3ke: not in enabled drivers build config
00:02:11.045 net/ixgbe: not in enabled drivers build config
00:02:11.045 net/mana: not in enabled drivers build config
00:02:11.045 net/memif: not in enabled drivers build config
00:02:11.045 net/mlx4: not in enabled drivers build config
00:02:11.045 net/mlx5: not in enabled drivers build config
00:02:11.045 net/mvneta: not in enabled drivers build config
00:02:11.045 net/mvpp2: not in enabled drivers build config
00:02:11.045 net/netvsc: not in enabled drivers build config
00:02:11.045 net/nfb: not in enabled drivers build config
00:02:11.045 net/nfp: not in enabled drivers build config
00:02:11.045 net/ngbe: not in enabled drivers build config
00:02:11.045 net/null: not in enabled drivers build config
00:02:11.045 net/octeontx: not in enabled drivers build config
00:02:11.045 net/octeon_ep: not in enabled drivers build config
00:02:11.045 net/pcap: not in enabled drivers build config
00:02:11.045 net/pfe: not in enabled drivers build config
00:02:11.045 net/qede: not in enabled drivers build config
00:02:11.045 net/ring: not in enabled drivers build config
00:02:11.045 net/sfc: not in enabled drivers build config
00:02:11.045 net/softnic: not in enabled drivers build config
00:02:11.045 net/tap: not in enabled drivers build config
00:02:11.045 net/thunderx: not in enabled drivers build config
00:02:11.045 net/txgbe: not in enabled drivers build config
00:02:11.045 net/vdev_netvsc: not in enabled drivers build config
00:02:11.045 net/vhost: not in enabled drivers build config
00:02:11.045 net/virtio: not in enabled drivers build config
00:02:11.045 net/vmxnet3: not in enabled drivers build config
00:02:11.045 raw/*: missing internal dependency, "rawdev"
00:02:11.045 crypto/armv8: not in enabled drivers build config
00:02:11.045 crypto/bcmfs: not in enabled drivers build config
00:02:11.045 crypto/caam_jr: not in enabled drivers build config
00:02:11.045 crypto/ccp: not in enabled drivers build config
00:02:11.045 crypto/cnxk: not in enabled drivers build config
00:02:11.045 crypto/dpaa_sec: not in enabled drivers build config
00:02:11.045 crypto/dpaa2_sec: not in enabled drivers build config
00:02:11.045 crypto/ipsec_mb: not in enabled drivers build config
00:02:11.045 crypto/mlx5: not in enabled drivers build config
00:02:11.045 crypto/mvsam: not in enabled drivers build config
00:02:11.045 crypto/nitrox: not in enabled drivers build config
00:02:11.045 crypto/null: not in enabled drivers build config
00:02:11.045 crypto/octeontx: not in enabled drivers build config
00:02:11.045 crypto/openssl: not in enabled drivers build config
00:02:11.045 crypto/scheduler: not in enabled drivers build config
00:02:11.045 crypto/uadk: not in enabled drivers build config
00:02:11.045 crypto/virtio: not in enabled drivers build config
00:02:11.045 compress/isal: not in enabled drivers build config
00:02:11.045 compress/mlx5: not in enabled drivers build config
00:02:11.045 compress/nitrox: not in enabled drivers build config
00:02:11.045 compress/octeontx: not in enabled drivers build config
00:02:11.045 compress/zlib: not in enabled drivers build config
00:02:11.045 regex/*: missing internal dependency, "regexdev"
00:02:11.045 ml/*: missing internal dependency, "mldev"
00:02:11.045 vdpa/ifc: not in enabled drivers build config
00:02:11.045 vdpa/mlx5: not in enabled drivers build config
00:02:11.045 vdpa/nfp: not in enabled drivers build config
00:02:11.045 vdpa/sfc: not in enabled drivers build config
00:02:11.045 event/*: missing internal dependency, "eventdev"
00:02:11.045 baseband/*: missing internal dependency, "bbdev"
00:02:11.045 gpu/*: missing internal dependency, "gpudev"
00:02:11.045
00:02:11.045
00:02:11.045 Build targets in project: 85
00:02:11.045
00:02:11.045 DPDK 24.03.0
00:02:11.045
00:02:11.045 User defined options
00:02:11.045 buildtype : debug
00:02:11.045 default_library : static
00:02:11.045 libdir : lib
00:02:11.045 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:11.045 c_args : -fPIC -Werror
00:02:11.045 c_link_args :
00:02:11.045 cpu_instruction_set: native
00:02:11.045 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:11.045 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:11.045 enable_docs : false
00:02:11.045 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:11.045 enable_kmods : false
00:02:11.045 max_lcores : 128
00:02:11.045 tests : false
00:02:11.045
00:02:11.045 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:11.045 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:02:11.045 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:11.045 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:11.045 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:11.045 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:11.045 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:11.045 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:11.045 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:11.045 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:11.045 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:11.045 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:11.045 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:11.045 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:11.045 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:11.045 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:11.045 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:11.045 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:11.045 [17/268] Linking static target lib/librte_kvargs.a
00:02:11.045 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:11.045 [19/268] Linking static target lib/librte_log.a
00:02:11.625 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:11.625 [21/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:11.625 [22/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:11.625 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:11.625 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:11.625 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:11.625 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:11.625 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:11.625 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:11.625 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:11.625 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:11.625 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:11.625 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:11.625 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:11.625 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:11.625 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:11.625 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:11.625 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:11.625 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:11.625 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:11.625 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:11.625 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:11.625 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:11.625 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:11.625 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:11.625 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:11.625 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:11.625 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:11.625 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:11.625 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:11.625 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:11.625 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:11.625 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:11.625 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:11.625 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:11.625 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:11.625 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:11.625 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:11.625 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:11.625 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:11.625 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:11.625 [61/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:11.625 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:11.625 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:11.625 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:11.625 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:11.625 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:11.625 [67/268] Linking static target lib/librte_telemetry.a
00:02:11.625 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:11.625 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:11.625 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:11.625 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:11.625 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:11.625 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:11.625 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:11.625 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:11.625 [76/268] Linking static target lib/librte_ring.a
00:02:11.625 [77/268] Linking static target lib/librte_pci.a
00:02:11.625 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:11.625 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:11.625 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:11.625 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:11.625 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:11.625 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:11.625 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:11.625 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:11.625 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:11.625 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:11.625 [88/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.625 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:11.625 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:11.625 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:11.625 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:11.625 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:11.625 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:11.625 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:11.625 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:11.625 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:11.625 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:11.625 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:11.625 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:11.625 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:11.625 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:11.625 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:11.625 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:11.625 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:11.625 [106/268] Linking static target lib/librte_eal.a
00:02:11.625 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:11.625 [108/268] Linking static target lib/librte_mempool.a
00:02:11.625 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:11.625 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:11.625 [111/268] Linking static target lib/librte_rcu.a
00:02:11.625 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:11.625 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:11.625 [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:11.625 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to
capture output) 00:02:11.885 [116/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.144 [117/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.144 [118/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.144 [119/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:12.144 [120/268] Linking static target lib/librte_meter.a 00:02:12.144 [121/268] Linking static target lib/librte_mbuf.a 00:02:12.144 [122/268] Linking static target lib/librte_net.a 00:02:12.144 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.144 [124/268] Linking target lib/librte_log.so.24.1 00:02:12.144 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.144 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.144 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.144 [128/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.144 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.144 [130/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.144 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.144 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:12.144 [133/268] Linking static target lib/librte_timer.a 00:02:12.144 [134/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.144 [135/268] Linking static target lib/librte_cmdline.a 00:02:12.144 [136/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.144 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.144 [138/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.144 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.144 [140/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.144 [141/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.144 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.144 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.144 [144/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.144 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.144 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.144 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.144 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.144 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.144 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:12.144 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.144 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.408 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.408 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.408 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
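[Editor's note] The "User defined options" summary above maps onto a DPDK meson/ninja invocation along the following lines. This is a hedged reconstruction from the echoed options, not the literal command recorded by the harness (which does not appear in this log); it assumes it is run from the dpdk source tree checked out under spdk/dpdk, and the -j 72 value is taken from the backend command echoed further down.

# Sketch: reproduce the DPDK configuration summarized above.
# Run from the dpdk/ source directory; build-tmp matches the
# directory ninja enters in the log.
meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=static \
    -Dlibdir=lib \
    -Dprefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf \
    -Ddisable_libs=port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
# Build with the same parallelism the log reports.
ninja -C build-tmp -j 72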
00:02:12.408 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.408 [157/268] Linking target lib/librte_telemetry.so.24.1 00:02:12.408 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.408 [159/268] Linking target lib/librte_kvargs.so.24.1 00:02:12.408 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.408 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.408 [162/268] Linking static target lib/librte_dmadev.a 00:02:12.408 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:12.408 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:12.408 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.408 [166/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.408 [167/268] Linking static target lib/librte_compressdev.a 00:02:12.408 [168/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.408 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.408 [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.408 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:12.408 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.408 [173/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.408 [174/268] Linking static target lib/librte_reorder.a 00:02:12.408 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.408 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.408 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.408 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.408 [179/268] Linking static target lib/librte_power.a 00:02:12.409 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.409 [181/268] Linking static target lib/librte_security.a 00:02:12.409 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.409 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.409 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:12.409 [185/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.409 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:12.409 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.409 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.409 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.409 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.409 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:12.409 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.409 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.409 [194/268] Linking static target lib/librte_hash.a 00:02:12.409 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.409 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.713 [197/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.713 [198/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.713 [199/268] Linking static target lib/librte_cryptodev.a 00:02:12.713 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.713 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.713 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.713 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.713 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.713 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.713 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:12.713 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.713 [208/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.713 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.713 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.713 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.713 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:12.713 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.713 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.713 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.713 [216/268] Linking static target drivers/librte_mempool_ring.a 00:02:12.713 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.023 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.023 [219/268] Linking static target lib/librte_ethdev.a 00:02:13.023 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.023 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.023 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.023 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.280 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.280 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.549 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.549 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.549 [228/268] Linking static target lib/librte_vhost.a 00:02:13.549 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.921 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.487 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.610 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:23.610 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.610 [234/268] Linking target lib/librte_eal.so.24.1 00:02:23.868 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.868 [236/268] Linking target lib/librte_ring.so.24.1 00:02:23.868 [237/268] Linking target lib/librte_meter.so.24.1 00:02:23.868 [238/268] Linking target lib/librte_timer.so.24.1 00:02:23.868 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:23.868 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:23.868 [241/268] Linking target lib/librte_pci.so.24.1 00:02:24.126 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.126 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.126 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.126 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.126 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.126 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:24.126 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:24.126 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.126 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.126 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.385 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.385 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.385 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.385 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.385 [256/268] Linking target lib/librte_net.so.24.1 00:02:24.385 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:24.642 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.642 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.642 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.642 [261/268] Linking target lib/librte_hash.so.24.1 00:02:24.642 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.642 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:24.642 [264/268] Linking target lib/librte_security.so.24.1 00:02:24.901 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:24.901 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:24.901 [267/268] Linking target lib/librte_power.so.24.1 00:02:24.901 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:24.901 INFO: autodetecting backend as ninja 00:02:24.901 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:25.832 CC lib/ut_mock/mock.o 00:02:25.832 CC lib/ut/ut.o 00:02:25.832 CC lib/log/log.o 00:02:25.832 CC lib/log/log_flags.o 00:02:25.832 CC lib/log/log_deprecated.o 00:02:26.090 LIB libspdk_ut_mock.a 00:02:26.090 LIB libspdk_ut.a 00:02:26.090 LIB libspdk_log.a 00:02:26.348 CC lib/util/base64.o 00:02:26.348 CC lib/util/bit_array.o 00:02:26.348 CC lib/util/cpuset.o 00:02:26.348 CC lib/util/crc16.o 00:02:26.348 CC lib/util/crc32_ieee.o 00:02:26.348 CC lib/util/crc32.o 00:02:26.348 
CC lib/util/crc32c.o 00:02:26.348 CC lib/util/dif.o 00:02:26.348 CC lib/util/crc64.o 00:02:26.348 CC lib/dma/dma.o 00:02:26.348 CC lib/util/fd_group.o 00:02:26.348 CC lib/util/fd.o 00:02:26.348 CC lib/util/hexlify.o 00:02:26.348 CC lib/util/iov.o 00:02:26.348 CC lib/util/file.o 00:02:26.348 CC lib/util/net.o 00:02:26.348 CC lib/util/math.o 00:02:26.348 CC lib/util/pipe.o 00:02:26.348 CC lib/util/strerror_tls.o 00:02:26.348 CC lib/util/string.o 00:02:26.348 CC lib/util/uuid.o 00:02:26.348 CC lib/ioat/ioat.o 00:02:26.348 CC lib/util/xor.o 00:02:26.348 CXX lib/trace_parser/trace.o 00:02:26.348 CC lib/util/zipf.o 00:02:26.348 CC lib/util/md5.o 00:02:26.348 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.348 CC lib/vfio_user/host/vfio_user.o 00:02:26.606 LIB libspdk_dma.a 00:02:26.606 LIB libspdk_ioat.a 00:02:26.606 LIB libspdk_vfio_user.a 00:02:26.606 LIB libspdk_util.a 00:02:26.864 LIB libspdk_trace_parser.a 00:02:26.864 CC lib/json/json_util.o 00:02:26.864 CC lib/json/json_parse.o 00:02:26.864 CC lib/rdma_provider/common.o 00:02:26.864 CC lib/json/json_write.o 00:02:26.864 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:26.864 CC lib/env_dpdk/env.o 00:02:26.864 CC lib/env_dpdk/memory.o 00:02:26.864 CC lib/env_dpdk/pci.o 00:02:26.864 CC lib/conf/conf.o 00:02:26.864 CC lib/env_dpdk/init.o 00:02:26.864 CC lib/env_dpdk/threads.o 00:02:26.864 CC lib/env_dpdk/pci_ioat.o 00:02:26.864 CC lib/env_dpdk/pci_virtio.o 00:02:26.864 CC lib/rdma_utils/rdma_utils.o 00:02:26.864 CC lib/env_dpdk/pci_dpdk.o 00:02:26.864 CC lib/env_dpdk/pci_vmd.o 00:02:26.864 CC lib/env_dpdk/pci_idxd.o 00:02:26.864 CC lib/env_dpdk/pci_event.o 00:02:26.864 CC lib/env_dpdk/sigbus_handler.o 00:02:26.864 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.865 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.865 CC lib/vmd/vmd.o 00:02:26.865 CC lib/vmd/led.o 00:02:26.865 CC lib/idxd/idxd_kernel.o 00:02:26.865 CC lib/idxd/idxd_user.o 00:02:26.865 CC lib/idxd/idxd.o 00:02:27.122 LIB libspdk_rdma_provider.a 00:02:27.122 LIB libspdk_conf.a 00:02:27.122 LIB libspdk_json.a 00:02:27.122 LIB libspdk_rdma_utils.a 00:02:27.380 LIB libspdk_idxd.a 00:02:27.380 LIB libspdk_vmd.a 00:02:27.380 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.380 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.380 CC lib/jsonrpc/jsonrpc_client.o 00:02:27.380 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.638 LIB libspdk_jsonrpc.a 00:02:27.904 CC lib/rpc/rpc.o 00:02:27.904 LIB libspdk_env_dpdk.a 00:02:28.162 LIB libspdk_rpc.a 00:02:28.421 CC lib/trace/trace_flags.o 00:02:28.421 CC lib/trace/trace.o 00:02:28.421 CC lib/trace/trace_rpc.o 00:02:28.421 CC lib/notify/notify.o 00:02:28.421 CC lib/notify/notify_rpc.o 00:02:28.421 CC lib/keyring/keyring_rpc.o 00:02:28.421 CC lib/keyring/keyring.o 00:02:28.421 LIB libspdk_notify.a 00:02:28.421 LIB libspdk_trace.a 00:02:28.679 LIB libspdk_keyring.a 00:02:28.680 CC lib/thread/thread.o 00:02:28.680 CC lib/thread/iobuf.o 00:02:28.680 CC lib/sock/sock.o 00:02:28.680 CC lib/sock/sock_rpc.o 00:02:28.938 LIB libspdk_sock.a 00:02:29.506 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.506 CC lib/nvme/nvme_ctrlr.o 00:02:29.506 CC lib/nvme/nvme_ns.o 00:02:29.506 CC lib/nvme/nvme_fabric.o 00:02:29.506 CC lib/nvme/nvme_ns_cmd.o 00:02:29.506 CC lib/nvme/nvme_pcie.o 00:02:29.506 CC lib/nvme/nvme_pcie_common.o 00:02:29.506 CC lib/nvme/nvme_qpair.o 00:02:29.506 CC lib/nvme/nvme.o 00:02:29.506 CC lib/nvme/nvme_quirks.o 00:02:29.506 CC lib/nvme/nvme_transport.o 00:02:29.506 CC lib/nvme/nvme_discovery.o 00:02:29.506 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.506 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.506 CC lib/nvme/nvme_tcp.o 00:02:29.506 CC lib/nvme/nvme_opal.o 00:02:29.506 CC lib/nvme/nvme_io_msg.o 00:02:29.506 CC lib/nvme/nvme_poll_group.o 00:02:29.506 CC lib/nvme/nvme_zns.o 00:02:29.506 CC lib/nvme/nvme_stubs.o 00:02:29.506 CC lib/nvme/nvme_auth.o 00:02:29.506 CC lib/nvme/nvme_cuse.o 00:02:29.506 CC lib/nvme/nvme_vfio_user.o 00:02:29.506 CC lib/nvme/nvme_rdma.o 00:02:29.506 LIB libspdk_thread.a 00:02:29.765 CC lib/virtio/virtio_vhost_user.o 00:02:29.765 CC lib/virtio/virtio.o 00:02:29.765 CC lib/virtio/virtio_pci.o 00:02:29.765 CC lib/virtio/virtio_vfio_user.o 00:02:29.765 CC lib/accel/accel.o 00:02:29.765 CC lib/accel/accel_sw.o 00:02:29.765 CC lib/accel/accel_rpc.o 00:02:29.765 CC lib/blob/blobstore.o 00:02:29.765 CC lib/blob/zeroes.o 00:02:29.765 CC lib/blob/request.o 00:02:29.765 CC lib/blob/blob_bs_dev.o 00:02:29.765 CC lib/fsdev/fsdev.o 00:02:29.765 CC lib/fsdev/fsdev_io.o 00:02:29.765 CC lib/fsdev/fsdev_rpc.o 00:02:29.765 CC lib/vfu_tgt/tgt_endpoint.o 00:02:29.765 CC lib/vfu_tgt/tgt_rpc.o 00:02:29.765 CC lib/init/json_config.o 00:02:29.765 CC lib/init/subsystem.o 00:02:29.765 CC lib/init/subsystem_rpc.o 00:02:29.765 CC lib/init/rpc.o 00:02:30.024 LIB libspdk_virtio.a 00:02:30.024 LIB libspdk_init.a 00:02:30.024 LIB libspdk_vfu_tgt.a 00:02:30.284 LIB libspdk_fsdev.a 00:02:30.284 CC lib/event/app.o 00:02:30.284 CC lib/event/reactor.o 00:02:30.284 CC lib/event/log_rpc.o 00:02:30.284 CC lib/event/app_rpc.o 00:02:30.284 CC lib/event/scheduler_static.o 00:02:30.543 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:30.543 LIB libspdk_event.a 00:02:30.543 LIB libspdk_accel.a 00:02:30.802 LIB libspdk_nvme.a 00:02:30.802 CC lib/bdev/bdev_zone.o 00:02:30.802 CC lib/bdev/bdev.o 00:02:30.802 CC lib/bdev/part.o 00:02:30.802 CC lib/bdev/scsi_nvme.o 00:02:30.802 CC lib/bdev/bdev_rpc.o 00:02:30.802 LIB libspdk_fuse_dispatcher.a 00:02:31.741 LIB libspdk_blob.a 00:02:32.000 CC lib/blobfs/blobfs.o 00:02:32.000 CC lib/blobfs/tree.o 00:02:32.000 CC lib/lvol/lvol.o 00:02:32.574 LIB libspdk_lvol.a 00:02:32.574 LIB libspdk_blobfs.a 00:02:32.574 LIB libspdk_bdev.a 00:02:32.839 CC lib/nvmf/ctrlr.o 00:02:32.839 CC lib/nvmf/ctrlr_discovery.o 00:02:32.839 CC lib/ftl/ftl_core.o 00:02:32.839 CC lib/nvmf/ctrlr_bdev.o 00:02:32.839 CC lib/ftl/ftl_init.o 00:02:32.839 CC lib/nvmf/subsystem.o 00:02:32.839 CC lib/ftl/ftl_layout.o 00:02:32.839 CC lib/nvmf/nvmf.o 00:02:32.839 CC lib/nvmf/nvmf_rpc.o 00:02:32.839 CC lib/ftl/ftl_debug.o 00:02:32.839 CC lib/nvmf/transport.o 00:02:32.839 CC lib/ftl/ftl_io.o 00:02:32.840 CC lib/nvmf/tcp.o 00:02:32.840 CC lib/ftl/ftl_sb.o 00:02:32.840 CC lib/nvmf/stubs.o 00:02:32.840 CC lib/ftl/ftl_l2p.o 00:02:32.840 CC lib/nvmf/rdma.o 00:02:32.840 CC lib/nvmf/vfio_user.o 00:02:32.840 CC lib/nvmf/mdns_server.o 00:02:32.840 CC lib/ftl/ftl_l2p_flat.o 00:02:32.840 CC lib/ftl/ftl_nv_cache.o 00:02:32.840 CC lib/nvmf/auth.o 00:02:32.840 CC lib/ftl/ftl_band.o 00:02:32.840 CC lib/ftl/ftl_band_ops.o 00:02:32.840 CC lib/ftl/ftl_writer.o 00:02:32.840 CC lib/ftl/ftl_reloc.o 00:02:32.840 CC lib/nbd/nbd.o 00:02:32.840 CC lib/ftl/ftl_rq.o 00:02:32.840 CC lib/nbd/nbd_rpc.o 00:02:32.840 CC lib/ftl/ftl_l2p_cache.o 00:02:32.840 CC lib/ftl/ftl_p2l.o 00:02:32.840 CC lib/ftl/ftl_p2l_log.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.840 CC lib/ublk/ublk.o 00:02:32.840 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.840 CC lib/ublk/ublk_rpc.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.840 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.840 CC lib/ftl/utils/ftl_conf.o 00:02:32.840 CC lib/ftl/utils/ftl_md.o 00:02:32.840 CC lib/ftl/utils/ftl_mempool.o 00:02:32.840 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.840 CC lib/ftl/utils/ftl_property.o 00:02:32.840 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.840 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.840 CC lib/scsi/dev.o 00:02:32.840 CC lib/scsi/port.o 00:02:32.840 CC lib/scsi/lun.o 00:02:32.840 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.840 CC lib/scsi/scsi_bdev.o 00:02:32.840 CC lib/scsi/scsi.o 00:02:32.840 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.840 CC lib/scsi/scsi_pr.o 00:02:32.840 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.840 CC lib/scsi/scsi_rpc.o 00:02:32.840 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.840 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:32.840 CC lib/scsi/task.o 00:02:32.840 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.840 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.840 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.840 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.840 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:33.097 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:33.097 CC lib/ftl/base/ftl_base_dev.o 00:02:33.097 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.097 CC lib/ftl/ftl_trace.o 00:02:33.357 LIB libspdk_nbd.a 00:02:33.357 LIB libspdk_scsi.a 00:02:33.616 LIB libspdk_ublk.a 00:02:33.616 LIB libspdk_ftl.a 00:02:33.616 CC lib/vhost/vhost.o 00:02:33.616 CC lib/vhost/vhost_rpc.o 00:02:33.616 CC lib/iscsi/iscsi.o 00:02:33.616 CC lib/iscsi/conn.o 00:02:33.616 CC lib/vhost/vhost_scsi.o 00:02:33.616 CC lib/iscsi/init_grp.o 00:02:33.616 CC lib/vhost/vhost_blk.o 00:02:33.616 CC lib/vhost/rte_vhost_user.o 00:02:33.616 CC lib/iscsi/param.o 00:02:33.616 CC lib/iscsi/portal_grp.o 00:02:33.616 CC lib/iscsi/tgt_node.o 00:02:33.616 CC lib/iscsi/iscsi_subsystem.o 00:02:33.616 CC lib/iscsi/iscsi_rpc.o 00:02:33.616 CC lib/iscsi/task.o 00:02:34.182 LIB libspdk_nvmf.a 00:02:34.440 LIB libspdk_vhost.a 00:02:34.440 LIB libspdk_iscsi.a 00:02:35.008 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.008 CC module/vfu_device/vfu_virtio.o 00:02:35.008 CC module/vfu_device/vfu_virtio_scsi.o 00:02:35.008 CC module/vfu_device/vfu_virtio_rpc.o 00:02:35.008 CC module/vfu_device/vfu_virtio_blk.o 00:02:35.008 CC module/vfu_device/vfu_virtio_fs.o 00:02:35.008 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.008 CC module/accel/iaa/accel_iaa.o 00:02:35.008 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.008 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.008 LIB libspdk_env_dpdk_rpc.a 00:02:35.008 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.008 CC module/accel/error/accel_error.o 00:02:35.008 CC module/accel/error/accel_error_rpc.o 00:02:35.008 CC module/sock/posix/posix.o 00:02:35.008 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.008 CC module/accel/dsa/accel_dsa.o 00:02:35.008 CC module/accel/ioat/accel_ioat.o 00:02:35.008 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.008 CC module/keyring/file/keyring.o 00:02:35.008 CC module/blob/bdev/blob_bdev.o 00:02:35.008 CC module/keyring/file/keyring_rpc.o 00:02:35.008 CC module/keyring/linux/keyring.o 00:02:35.008 CC 
module/keyring/linux/keyring_rpc.o 00:02:35.008 CC module/fsdev/aio/fsdev_aio.o 00:02:35.008 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:35.008 CC module/fsdev/aio/linux_aio_mgr.o 00:02:35.008 LIB libspdk_scheduler_gscheduler.a 00:02:35.267 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.267 LIB libspdk_keyring_linux.a 00:02:35.267 LIB libspdk_accel_error.a 00:02:35.267 LIB libspdk_keyring_file.a 00:02:35.267 LIB libspdk_scheduler_dynamic.a 00:02:35.267 LIB libspdk_accel_iaa.a 00:02:35.267 LIB libspdk_accel_ioat.a 00:02:35.267 LIB libspdk_blob_bdev.a 00:02:35.267 LIB libspdk_accel_dsa.a 00:02:35.267 LIB libspdk_vfu_device.a 00:02:35.525 LIB libspdk_sock_posix.a 00:02:35.525 LIB libspdk_fsdev_aio.a 00:02:35.525 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.525 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.525 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.525 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.525 CC module/bdev/delay/vbdev_delay.o 00:02:35.525 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.525 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.525 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.525 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.526 CC module/bdev/gpt/gpt.o 00:02:35.526 CC module/bdev/malloc/bdev_malloc.o 00:02:35.526 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.526 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.526 CC module/bdev/nvme/nvme_rpc.o 00:02:35.526 CC module/bdev/nvme/bdev_nvme.o 00:02:35.526 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.526 CC module/bdev/nvme/vbdev_opal.o 00:02:35.526 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:35.526 CC module/bdev/split/vbdev_split.o 00:02:35.526 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.526 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.526 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.526 CC module/bdev/aio/bdev_aio.o 00:02:35.526 CC module/bdev/raid/bdev_raid.o 00:02:35.526 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.526 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.526 CC module/bdev/raid/concat.o 00:02:35.526 CC module/bdev/raid/raid1.o 00:02:35.526 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.526 CC module/bdev/raid/raid0.o 00:02:35.526 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.526 CC module/bdev/ftl/bdev_ftl.o 00:02:35.526 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.526 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.526 CC module/bdev/null/bdev_null.o 00:02:35.526 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.526 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.526 CC module/bdev/null/bdev_null_rpc.o 00:02:35.526 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.526 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.526 CC module/bdev/error/vbdev_error.o 00:02:35.526 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.783 LIB libspdk_bdev_gpt.a 00:02:35.783 LIB libspdk_bdev_passthru.a 00:02:35.783 LIB libspdk_bdev_iscsi.a 00:02:35.783 LIB libspdk_bdev_null.a 00:02:35.783 LIB libspdk_blobfs_bdev.a 00:02:35.783 LIB libspdk_bdev_aio.a 00:02:35.783 LIB libspdk_bdev_ftl.a 00:02:35.783 LIB libspdk_bdev_zone_block.a 00:02:35.783 LIB libspdk_bdev_split.a 00:02:35.783 LIB libspdk_bdev_delay.a 00:02:35.783 LIB libspdk_bdev_malloc.a 00:02:36.042 LIB libspdk_bdev_error.a 00:02:36.042 LIB libspdk_bdev_virtio.a 00:02:36.042 LIB libspdk_bdev_lvol.a 00:02:36.300 LIB libspdk_bdev_raid.a 00:02:37.236 LIB libspdk_bdev_nvme.a 00:02:37.495 CC module/event/subsystems/keyring/keyring.o 00:02:37.495 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.495 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.495 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.495 CC module/event/subsystems/vmd/vmd.o 00:02:37.495 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.495 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.495 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.495 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.495 CC module/event/subsystems/sock/sock.o 00:02:37.753 LIB libspdk_event_keyring.a 00:02:37.753 LIB libspdk_event_vfu_tgt.a 00:02:37.753 LIB libspdk_event_vmd.a 00:02:37.753 LIB libspdk_event_scheduler.a 00:02:37.753 LIB libspdk_event_iobuf.a 00:02:37.753 LIB libspdk_event_vhost_blk.a 00:02:37.753 LIB libspdk_event_sock.a 00:02:37.753 LIB libspdk_event_fsdev.a 00:02:38.010 CC module/event/subsystems/accel/accel.o 00:02:38.010 LIB libspdk_event_accel.a 00:02:38.269 CC module/event/subsystems/bdev/bdev.o 00:02:38.527 LIB libspdk_event_bdev.a 00:02:38.785 CC module/event/subsystems/nbd/nbd.o 00:02:38.785 CC module/event/subsystems/ublk/ublk.o 00:02:38.785 CC module/event/subsystems/scsi/scsi.o 00:02:38.785 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:38.785 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.785 LIB libspdk_event_nbd.a 00:02:38.785 LIB libspdk_event_ublk.a 00:02:38.785 LIB libspdk_event_scsi.a 00:02:38.785 LIB libspdk_event_nvmf.a 00:02:39.055 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.055 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.318 LIB libspdk_event_iscsi.a 00:02:39.318 LIB libspdk_event_vhost_scsi.a 00:02:39.578 CC app/spdk_nvme_perf/perf.o 00:02:39.578 CXX app/trace/trace.o 00:02:39.578 TEST_HEADER include/spdk/accel.h 00:02:39.578 TEST_HEADER include/spdk/assert.h 00:02:39.578 TEST_HEADER include/spdk/barrier.h 00:02:39.578 TEST_HEADER include/spdk/accel_module.h 00:02:39.578 TEST_HEADER include/spdk/base64.h 00:02:39.578 TEST_HEADER include/spdk/bdev.h 00:02:39.578 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.578 TEST_HEADER include/spdk/bdev_module.h 00:02:39.578 TEST_HEADER include/spdk/bit_pool.h 00:02:39.578 TEST_HEADER include/spdk/bit_array.h 00:02:39.578 CC app/trace_record/trace_record.o 00:02:39.578 CC app/spdk_nvme_identify/identify.o 00:02:39.578 CC app/spdk_lspci/spdk_lspci.o 00:02:39.578 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.578 TEST_HEADER include/spdk/blobfs.h 00:02:39.578 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.578 TEST_HEADER include/spdk/blob.h 00:02:39.578 TEST_HEADER include/spdk/conf.h 00:02:39.578 TEST_HEADER include/spdk/config.h 00:02:39.578 TEST_HEADER include/spdk/cpuset.h 00:02:39.578 TEST_HEADER include/spdk/crc16.h 00:02:39.578 TEST_HEADER include/spdk/crc64.h 00:02:39.578 TEST_HEADER include/spdk/crc32.h 00:02:39.578 TEST_HEADER include/spdk/dma.h 00:02:39.578 TEST_HEADER include/spdk/dif.h 00:02:39.578 TEST_HEADER include/spdk/endian.h 00:02:39.578 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.578 TEST_HEADER include/spdk/env.h 00:02:39.578 CC test/rpc_client/rpc_client_test.o 00:02:39.578 TEST_HEADER include/spdk/fd.h 00:02:39.578 TEST_HEADER include/spdk/event.h 00:02:39.578 TEST_HEADER include/spdk/fd_group.h 00:02:39.578 TEST_HEADER include/spdk/file.h 00:02:39.578 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.578 TEST_HEADER include/spdk/fsdev.h 00:02:39.578 TEST_HEADER include/spdk/ftl.h 00:02:39.578 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:39.578 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.578 TEST_HEADER include/spdk/hexlify.h 00:02:39.578 TEST_HEADER include/spdk/histogram_data.h 00:02:39.578 TEST_HEADER include/spdk/idxd.h 00:02:39.578 CC 
app/spdk_top/spdk_top.o 00:02:39.578 TEST_HEADER include/spdk/init.h 00:02:39.578 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.578 TEST_HEADER include/spdk/ioat.h 00:02:39.578 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.578 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.578 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.578 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.578 TEST_HEADER include/spdk/json.h 00:02:39.578 TEST_HEADER include/spdk/keyring.h 00:02:39.578 TEST_HEADER include/spdk/keyring_module.h 00:02:39.578 TEST_HEADER include/spdk/likely.h 00:02:39.578 TEST_HEADER include/spdk/log.h 00:02:39.578 TEST_HEADER include/spdk/lvol.h 00:02:39.578 TEST_HEADER include/spdk/memory.h 00:02:39.578 TEST_HEADER include/spdk/md5.h 00:02:39.578 TEST_HEADER include/spdk/mmio.h 00:02:39.578 TEST_HEADER include/spdk/nbd.h 00:02:39.578 TEST_HEADER include/spdk/net.h 00:02:39.578 TEST_HEADER include/spdk/notify.h 00:02:39.578 TEST_HEADER include/spdk/nvme.h 00:02:39.578 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.578 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.578 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.578 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.578 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.578 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.578 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.578 TEST_HEADER include/spdk/nvmf.h 00:02:39.578 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.578 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.578 TEST_HEADER include/spdk/opal_spec.h 00:02:39.578 TEST_HEADER include/spdk/opal.h 00:02:39.578 TEST_HEADER include/spdk/pci_ids.h 00:02:39.578 TEST_HEADER include/spdk/pipe.h 00:02:39.578 TEST_HEADER include/spdk/queue.h 00:02:39.578 TEST_HEADER include/spdk/reduce.h 00:02:39.578 TEST_HEADER include/spdk/rpc.h 00:02:39.578 TEST_HEADER include/spdk/scheduler.h 00:02:39.578 TEST_HEADER include/spdk/scsi.h 00:02:39.578 TEST_HEADER include/spdk/sock.h 00:02:39.578 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.578 TEST_HEADER include/spdk/stdinc.h 00:02:39.578 TEST_HEADER include/spdk/string.h 00:02:39.578 TEST_HEADER include/spdk/thread.h 00:02:39.578 CC app/nvmf_tgt/nvmf_main.o 00:02:39.578 TEST_HEADER include/spdk/trace.h 00:02:39.578 TEST_HEADER include/spdk/trace_parser.h 00:02:39.578 TEST_HEADER include/spdk/tree.h 00:02:39.578 TEST_HEADER include/spdk/ublk.h 00:02:39.578 TEST_HEADER include/spdk/util.h 00:02:39.578 TEST_HEADER include/spdk/uuid.h 00:02:39.578 TEST_HEADER include/spdk/version.h 00:02:39.578 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.578 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.578 TEST_HEADER include/spdk/vhost.h 00:02:39.578 TEST_HEADER include/spdk/xor.h 00:02:39.578 TEST_HEADER include/spdk/zipf.h 00:02:39.578 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.578 TEST_HEADER include/spdk/vmd.h 00:02:39.578 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.578 CXX test/cpp_headers/accel.o 00:02:39.578 CXX test/cpp_headers/assert.o 00:02:39.578 CXX test/cpp_headers/accel_module.o 00:02:39.578 CXX test/cpp_headers/barrier.o 00:02:39.578 CXX test/cpp_headers/base64.o 00:02:39.578 CXX test/cpp_headers/bdev.o 00:02:39.578 CXX test/cpp_headers/bdev_module.o 00:02:39.578 CC app/spdk_dd/spdk_dd.o 00:02:39.578 CXX test/cpp_headers/bdev_zone.o 00:02:39.578 CXX test/cpp_headers/bit_array.o 00:02:39.578 CXX test/cpp_headers/bit_pool.o 00:02:39.578 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.578 CXX test/cpp_headers/blob_bdev.o 00:02:39.578 CXX test/cpp_headers/blobfs.o 00:02:39.578 CXX test/cpp_headers/blob.o 
00:02:39.578 CXX test/cpp_headers/conf.o 00:02:39.578 CXX test/cpp_headers/cpuset.o 00:02:39.578 CXX test/cpp_headers/config.o 00:02:39.578 CXX test/cpp_headers/crc16.o 00:02:39.578 CXX test/cpp_headers/crc32.o 00:02:39.578 CXX test/cpp_headers/crc64.o 00:02:39.578 CXX test/cpp_headers/dif.o 00:02:39.578 CXX test/cpp_headers/dma.o 00:02:39.578 CXX test/cpp_headers/endian.o 00:02:39.578 CXX test/cpp_headers/env_dpdk.o 00:02:39.578 CXX test/cpp_headers/env.o 00:02:39.578 CXX test/cpp_headers/event.o 00:02:39.579 CXX test/cpp_headers/fd_group.o 00:02:39.579 CXX test/cpp_headers/fd.o 00:02:39.579 CXX test/cpp_headers/file.o 00:02:39.579 CXX test/cpp_headers/fsdev.o 00:02:39.579 CXX test/cpp_headers/fsdev_module.o 00:02:39.579 CXX test/cpp_headers/ftl.o 00:02:39.579 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.579 CXX test/cpp_headers/gpt_spec.o 00:02:39.579 CXX test/cpp_headers/hexlify.o 00:02:39.579 CXX test/cpp_headers/histogram_data.o 00:02:39.579 CXX test/cpp_headers/idxd.o 00:02:39.579 CXX test/cpp_headers/idxd_spec.o 00:02:39.579 CC test/env/pci/pci_ut.o 00:02:39.579 CXX test/cpp_headers/init.o 00:02:39.579 CXX test/cpp_headers/ioat.o 00:02:39.579 CXX test/cpp_headers/ioat_spec.o 00:02:39.579 CC app/spdk_tgt/spdk_tgt.o 00:02:39.579 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.579 CC test/env/memory/memory_ut.o 00:02:39.579 CC test/env/vtophys/vtophys.o 00:02:39.579 CC test/thread/lock/spdk_lock.o 00:02:39.579 CC test/app/jsoncat/jsoncat.o 00:02:39.579 CC app/fio/nvme/fio_plugin.o 00:02:39.579 CC test/thread/poller_perf/poller_perf.o 00:02:39.579 CC test/app/histogram_perf/histogram_perf.o 00:02:39.579 CC test/app/stub/stub.o 00:02:39.579 CC examples/util/zipf/zipf.o 00:02:39.579 CC examples/ioat/verify/verify.o 00:02:39.579 CC examples/ioat/perf/perf.o 00:02:39.839 CC test/dma/test_dma/test_dma.o 00:02:39.839 CC app/fio/bdev/fio_plugin.o 00:02:39.839 CC test/app/bdev_svc/bdev_svc.o 00:02:39.839 LINK spdk_lspci 00:02:39.839 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.839 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.839 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.839 LINK rpc_client_test 00:02:39.839 LINK spdk_nvme_discover 00:02:39.839 LINK vtophys 00:02:39.839 LINK jsoncat 00:02:39.839 CXX test/cpp_headers/iscsi_spec.o 00:02:39.839 CXX test/cpp_headers/json.o 00:02:39.839 CXX test/cpp_headers/jsonrpc.o 00:02:39.839 CXX test/cpp_headers/keyring.o 00:02:39.839 CXX test/cpp_headers/keyring_module.o 00:02:39.839 CXX test/cpp_headers/likely.o 00:02:39.839 CXX test/cpp_headers/log.o 00:02:39.839 CXX test/cpp_headers/lvol.o 00:02:39.839 CXX test/cpp_headers/md5.o 00:02:39.839 CXX test/cpp_headers/memory.o 00:02:39.839 LINK histogram_perf 00:02:39.839 CXX test/cpp_headers/mmio.o 00:02:39.839 CXX test/cpp_headers/nbd.o 00:02:39.839 CXX test/cpp_headers/net.o 00:02:39.839 CXX test/cpp_headers/notify.o 00:02:39.839 CXX test/cpp_headers/nvme.o 00:02:39.839 LINK poller_perf 00:02:39.839 CXX test/cpp_headers/nvme_intel.o 00:02:39.839 LINK env_dpdk_post_init 00:02:39.839 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.839 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.839 CXX test/cpp_headers/nvme_spec.o 00:02:39.839 CXX test/cpp_headers/nvme_zns.o 00:02:39.839 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.839 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.839 CXX test/cpp_headers/nvmf.o 00:02:39.839 CXX test/cpp_headers/nvmf_spec.o 00:02:39.839 CXX test/cpp_headers/nvmf_transport.o 00:02:39.839 CXX test/cpp_headers/opal.o 00:02:39.839 CXX test/cpp_headers/opal_spec.o 
00:02:39.839 CXX test/cpp_headers/pci_ids.o 00:02:39.839 LINK zipf 00:02:39.839 CXX test/cpp_headers/pipe.o 00:02:39.839 CXX test/cpp_headers/queue.o 00:02:39.839 LINK interrupt_tgt 00:02:39.839 CXX test/cpp_headers/reduce.o 00:02:39.839 LINK spdk_trace_record 00:02:39.839 CXX test/cpp_headers/rpc.o 00:02:39.839 CXX test/cpp_headers/scheduler.o 00:02:39.839 CXX test/cpp_headers/scsi.o 00:02:39.839 LINK nvmf_tgt 00:02:39.839 CXX test/cpp_headers/scsi_spec.o 00:02:39.839 CXX test/cpp_headers/sock.o 00:02:39.839 CXX test/cpp_headers/stdinc.o 00:02:39.839 CXX test/cpp_headers/string.o 00:02:39.839 LINK stub 00:02:39.839 CXX test/cpp_headers/thread.o 00:02:39.839 LINK iscsi_tgt 00:02:39.839 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.839 LINK verify 00:02:39.839 LINK ioat_perf 00:02:39.839 LINK spdk_tgt 00:02:40.099 LINK bdev_svc 00:02:40.099 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:40.099 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:40.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:40.099 LINK spdk_trace 00:02:40.099 CXX test/cpp_headers/trace.o 00:02:40.099 CXX test/cpp_headers/trace_parser.o 00:02:40.099 CXX test/cpp_headers/tree.o 00:02:40.099 CXX test/cpp_headers/ublk.o 00:02:40.099 CXX test/cpp_headers/util.o 00:02:40.099 CXX test/cpp_headers/uuid.o 00:02:40.099 CXX test/cpp_headers/version.o 00:02:40.099 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.099 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.099 CXX test/cpp_headers/vhost.o 00:02:40.099 CXX test/cpp_headers/vmd.o 00:02:40.099 CXX test/cpp_headers/xor.o 00:02:40.099 CXX test/cpp_headers/zipf.o 00:02:40.099 LINK pci_ut 00:02:40.099 LINK nvme_fuzz 00:02:40.099 LINK spdk_dd 00:02:40.099 LINK test_dma 00:02:40.357 LINK spdk_nvme 00:02:40.357 LINK llvm_vfio_fuzz 00:02:40.357 LINK spdk_nvme_identify 00:02:40.357 LINK spdk_bdev 00:02:40.357 LINK mem_callbacks 00:02:40.357 CC examples/idxd/perf/perf.o 00:02:40.357 LINK spdk_nvme_perf 00:02:40.357 LINK spdk_top 00:02:40.616 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.616 CC examples/vmd/led/led.o 00:02:40.616 CC examples/sock/hello_world/hello_sock.o 00:02:40.616 LINK vhost_fuzz 00:02:40.616 CC examples/thread/thread/thread_ex.o 00:02:40.616 LINK llvm_nvme_fuzz 00:02:40.616 CC app/vhost/vhost.o 00:02:40.616 LINK lsvmd 00:02:40.616 LINK led 00:02:40.616 LINK hello_sock 00:02:40.616 LINK idxd_perf 00:02:40.616 LINK thread 00:02:40.873 LINK vhost 00:02:40.873 LINK memory_ut 00:02:40.873 LINK spdk_lock 00:02:41.132 LINK iscsi_fuzz 00:02:41.389 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.389 CC examples/nvme/arbitration/arbitration.o 00:02:41.389 CC examples/nvme/hotplug/hotplug.o 00:02:41.389 CC examples/nvme/hello_world/hello_world.o 00:02:41.389 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.389 CC examples/nvme/abort/abort.o 00:02:41.389 CC examples/nvme/reconnect/reconnect.o 00:02:41.389 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.389 CC test/event/event_perf/event_perf.o 00:02:41.389 CC test/event/reactor_perf/reactor_perf.o 00:02:41.389 CC test/event/app_repeat/app_repeat.o 00:02:41.646 CC test/event/reactor/reactor.o 00:02:41.646 LINK pmr_persistence 00:02:41.646 LINK hello_world 00:02:41.646 LINK hotplug 00:02:41.646 LINK cmb_copy 00:02:41.646 CC test/event/scheduler/scheduler.o 00:02:41.646 LINK event_perf 00:02:41.646 LINK reactor_perf 00:02:41.646 LINK reactor 00:02:41.646 LINK arbitration 00:02:41.646 LINK reconnect 00:02:41.646 LINK abort 00:02:41.646 LINK app_repeat 00:02:41.646 LINK nvme_manage 00:02:41.905 LINK scheduler 
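[Editor's note] The long run of "CXX test/cpp_headers/<name>.o" entries above is the public-header check: each header under include/spdk is compiled as its own C++ translation unit, so a header that fails to pull in its own dependencies (or is not C++-clean) breaks the build. A minimal sketch of the same idea follows; the paths and scratch directory are illustrative, not SPDK's actual test harness.

# For every public header, generate a one-line TU that includes
# only that header, then compile it standalone.
cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # checkout path from this log
mkdir -p /tmp/hdr_check
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\nint main(void) { return 0; }\n' "$name" \
        > "/tmp/hdr_check/$name.cpp"
    g++ -I include -c "/tmp/hdr_check/$name.cpp" -o "/tmp/hdr_check/$name.o"
done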
00:02:41.905 CC test/nvme/reset/reset.o 00:02:41.905 CC test/nvme/aer/aer.o 00:02:41.905 CC test/nvme/compliance/nvme_compliance.o 00:02:41.905 CC test/nvme/cuse/cuse.o 00:02:41.905 CC test/nvme/boot_partition/boot_partition.o 00:02:41.905 CC test/nvme/e2edp/nvme_dp.o 00:02:41.905 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.905 CC test/nvme/sgl/sgl.o 00:02:41.905 CC test/nvme/err_injection/err_injection.o 00:02:41.905 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.905 CC test/nvme/simple_copy/simple_copy.o 00:02:41.905 CC test/nvme/startup/startup.o 00:02:41.905 CC test/nvme/overhead/overhead.o 00:02:41.905 CC test/nvme/connect_stress/connect_stress.o 00:02:41.905 CC test/nvme/fdp/fdp.o 00:02:41.905 CC test/nvme/reserve/reserve.o 00:02:41.905 CC test/accel/dif/dif.o 00:02:41.905 CC test/blobfs/mkfs/mkfs.o 00:02:41.905 CC test/lvol/esnap/esnap.o 00:02:41.905 LINK boot_partition 00:02:41.905 LINK err_injection 00:02:41.905 LINK doorbell_aers 00:02:41.905 LINK startup 00:02:41.905 LINK fused_ordering 00:02:41.905 LINK reserve 00:02:42.164 LINK simple_copy 00:02:42.164 LINK reset 00:02:42.164 LINK aer 00:02:42.164 LINK connect_stress 00:02:42.164 LINK nvme_dp 00:02:42.164 LINK sgl 00:02:42.164 LINK overhead 00:02:42.164 LINK fdp 00:02:42.164 LINK mkfs 00:02:42.164 LINK nvme_compliance 00:02:42.422 LINK dif 00:02:42.422 CC examples/accel/perf/accel_perf.o 00:02:42.680 CC examples/blob/cli/blobcli.o 00:02:42.680 CC examples/blob/hello_world/hello_blob.o 00:02:42.680 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:42.680 LINK hello_blob 00:02:42.680 LINK hello_fsdev 00:02:42.938 LINK accel_perf 00:02:42.938 LINK cuse 00:02:42.938 LINK blobcli 00:02:43.505 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.505 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.763 LINK hello_bdev 00:02:44.022 CC test/bdev/bdevio/bdevio.o 00:02:44.022 LINK bdevperf 00:02:44.280 LINK bdevio 00:02:45.214 LINK esnap 00:02:45.483 CC examples/nvmf/nvmf/nvmf.o 00:02:45.753 LINK nvmf 00:02:47.218 00:02:47.218 real 0m45.465s 00:02:47.218 user 6m53.663s 00:02:47.218 sys 2m17.857s 00:02:47.218 00:10:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:47.218 00:10:17 make -- common/autotest_common.sh@10 -- $ set +x 00:02:47.218 ************************************ 00:02:47.218 END TEST make 00:02:47.218 ************************************ 00:02:47.218 00:10:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:47.218 00:10:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:47.218 00:10:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:47.218 00:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.218 00:10:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:47.218 00:10:17 -- pm/common@44 -- $ pid=3762987 00:02:47.218 00:10:17 -- pm/common@50 -- $ kill -TERM 3762987 00:02:47.218 00:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.218 00:10:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:47.218 00:10:17 -- pm/common@44 -- $ pid=3762989 00:02:47.218 00:10:17 -- pm/common@50 -- $ kill -TERM 3762989 00:02:47.218 00:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.218 00:10:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:47.218 00:10:17 -- pm/common@44 -- $ 
pid=3762991 00:02:47.218 00:10:17 -- pm/common@50 -- $ kill -TERM 3762991 00:02:47.218 00:10:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.218 00:10:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:47.218 00:10:17 -- pm/common@44 -- $ pid=3763014 00:02:47.218 00:10:17 -- pm/common@50 -- $ sudo -E kill -TERM 3763014 00:02:47.218 00:10:17 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:47.218 00:10:17 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:47.218 00:10:17 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:47.218 00:10:17 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:47.218 00:10:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:47.218 00:10:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:47.218 00:10:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:47.218 00:10:17 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.218 00:10:17 -- scripts/common.sh@336 -- # read -ra ver1 00:02:47.218 00:10:17 -- scripts/common.sh@337 -- # IFS=.-: 00:02:47.218 00:10:17 -- scripts/common.sh@337 -- # read -ra ver2 00:02:47.218 00:10:17 -- scripts/common.sh@338 -- # local 'op=<' 00:02:47.218 00:10:17 -- scripts/common.sh@340 -- # ver1_l=2 00:02:47.218 00:10:17 -- scripts/common.sh@341 -- # ver2_l=1 00:02:47.218 00:10:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:47.218 00:10:17 -- scripts/common.sh@344 -- # case "$op" in 00:02:47.218 00:10:17 -- scripts/common.sh@345 -- # : 1 00:02:47.218 00:10:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:47.218 00:10:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:47.218 00:10:17 -- scripts/common.sh@365 -- # decimal 1 00:02:47.218 00:10:17 -- scripts/common.sh@353 -- # local d=1 00:02:47.218 00:10:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.218 00:10:17 -- scripts/common.sh@355 -- # echo 1 00:02:47.218 00:10:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:47.218 00:10:17 -- scripts/common.sh@366 -- # decimal 2 00:02:47.218 00:10:17 -- scripts/common.sh@353 -- # local d=2 00:02:47.218 00:10:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.218 00:10:17 -- scripts/common.sh@355 -- # echo 2 00:02:47.218 00:10:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:47.218 00:10:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:47.218 00:10:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:47.218 00:10:17 -- scripts/common.sh@368 -- # return 0 00:02:47.218 00:10:17 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.218 00:10:17 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:47.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.218 --rc genhtml_branch_coverage=1 00:02:47.218 --rc genhtml_function_coverage=1 00:02:47.218 --rc genhtml_legend=1 00:02:47.218 --rc geninfo_all_blocks=1 00:02:47.218 --rc geninfo_unexecuted_blocks=1 00:02:47.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.218 ' 00:02:47.218 00:10:17 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:47.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.219 --rc genhtml_branch_coverage=1 00:02:47.219 --rc genhtml_function_coverage=1 00:02:47.219 --rc genhtml_legend=1 00:02:47.219 --rc geninfo_all_blocks=1 00:02:47.219 --rc geninfo_unexecuted_blocks=1 
00:02:47.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.219 ' 00:02:47.219 00:10:17 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:47.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.219 --rc genhtml_branch_coverage=1 00:02:47.219 --rc genhtml_function_coverage=1 00:02:47.219 --rc genhtml_legend=1 00:02:47.219 --rc geninfo_all_blocks=1 00:02:47.219 --rc geninfo_unexecuted_blocks=1 00:02:47.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.219 ' 00:02:47.219 00:10:17 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:47.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.219 --rc genhtml_branch_coverage=1 00:02:47.219 --rc genhtml_function_coverage=1 00:02:47.219 --rc genhtml_legend=1 00:02:47.219 --rc geninfo_all_blocks=1 00:02:47.219 --rc geninfo_unexecuted_blocks=1 00:02:47.219 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.219 ' 00:02:47.219 00:10:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.219 00:10:17 -- nvmf/common.sh@7 -- # uname -s 00:02:47.219 00:10:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.219 00:10:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.219 00:10:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.219 00:10:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.219 00:10:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.219 00:10:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.219 00:10:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.219 00:10:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.219 00:10:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.219 00:10:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.219 00:10:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:47.219 00:10:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:47.219 00:10:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.219 00:10:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.219 00:10:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:47.219 00:10:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.219 00:10:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:47.219 00:10:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.219 00:10:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.219 00:10:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.219 00:10:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.219 00:10:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.219 00:10:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.219 00:10:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.219 00:10:17 -- paths/export.sh@5 -- # export PATH 00:02:47.219 00:10:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.219 00:10:17 -- nvmf/common.sh@51 -- # : 0 00:02:47.219 00:10:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.219 00:10:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.219 00:10:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.219 00:10:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.219 00:10:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.219 00:10:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.219 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.219 00:10:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.219 00:10:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.219 00:10:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.219 00:10:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.219 00:10:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.219 00:10:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.219 00:10:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.219 00:10:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.219 00:10:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.219 00:10:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.219 00:10:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.219 00:10:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.219 00:10:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.219 00:10:17 -- spdk/autotest.sh@48 -- # udevadm_pid=3822276 00:02:47.219 00:10:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.219 00:10:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.219 00:10:17 -- pm/common@17 -- # local monitor 00:02:47.219 00:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.219 00:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.219 00:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.219 00:10:17 -- pm/common@21 -- # date +%s 00:02:47.219 00:10:17 -- pm/common@21 -- # date +%s 00:02:47.219 00:10:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.219 00:10:17 -- pm/common@21 -- # date +%s 00:02:47.219 00:10:17 -- pm/common@25 -- # sleep 1 
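The cmp_versions/lt trace at the top of this block is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, which selects the --rc lcov_* option set exported into LCOV_OPTS above. A minimal standalone sketch of that split-and-compare idiom, assuming purely numeric components (the real helper also normalizes non-numeric fields through its decimal function):

#!/usr/bin/env bash
# Sketch of the version comparison traced above: split both versions on
# '.', '-' or ':' and compare component by component; absent components
# evaluate to 0 inside (( )), so "1.15" compares like "1.15.0".
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "old lcov, keep the --rc branch/function coverage options"

Separately, the "integer expression expected" message from nvmf/common.sh line 33 comes from testing an empty string with -eq ('[' '' -eq 1 ']'); supplying a default, as in [ "${flag:-0}" -eq 1 ], is the usual guard (flag here is a stand-in name, not the actual SPDK variable).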
00:02:47.219 00:10:17 -- pm/common@21 -- # date +%s 00:02:47.219 00:10:17 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425417 00:02:47.219 00:10:17 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425417 00:02:47.219 00:10:17 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425417 00:02:47.219 00:10:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425417 00:02:47.479 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425417_collect-cpu-temp.pm.log 00:02:47.479 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425417_collect-vmstat.pm.log 00:02:47.479 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425417_collect-cpu-load.pm.log 00:02:47.479 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425417_collect-bmc-pm.bmc.pm.log 00:02:48.416 00:10:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.416 00:10:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.416 00:10:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:48.416 00:10:18 -- common/autotest_common.sh@10 -- # set +x 00:02:48.416 00:10:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.416 00:10:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:48.416 00:10:18 -- common/autotest_common.sh@10 -- # set +x 00:02:48.416 00:10:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:48.416 00:10:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.416 00:10:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.416 00:10:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:48.416 00:10:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.416 00:10:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.416 00:10:18 -- common/autotest_common.sh@1455 -- # uname 00:02:48.416 00:10:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.416 00:10:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.416 00:10:18 -- common/autotest_common.sh@1475 -- # uname 00:02:48.416 00:10:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.416 00:10:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.416 00:10:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:48.416 lcov: LCOV version 1.15 00:02:48.416 00:10:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:53.686 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:00.285 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.477 00:10:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:04.477 00:10:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.477 00:10:34 -- common/autotest_common.sh@10 -- # set +x 00:03:04.477 00:10:34 -- spdk/autotest.sh@78 -- # rm -f 00:03:04.477 00:10:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.765 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:08.023 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.023 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.281 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.821 00:10:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:10.821 00:10:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:10.821 00:10:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:10.821 00:10:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:10.821 00:10:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:10.821 00:10:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:10.821 00:10:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:10.821 00:10:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.821 00:10:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:10.821 00:10:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:10.821 00:10:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.821 00:10:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:10.821 00:10:40 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:03:10.821 00:10:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:10.821 00:10:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.821 No valid GPT data, bailing 00:03:10.821 00:10:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.821 00:10:41 -- scripts/common.sh@394 -- # pt= 00:03:10.821 00:10:41 -- scripts/common.sh@395 -- # return 1 00:03:10.821 00:10:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.821 1+0 records in 00:03:10.821 1+0 records out 00:03:10.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00203706 s, 515 MB/s 00:03:10.821 00:10:41 -- spdk/autotest.sh@105 -- # sync 00:03:10.821 00:10:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.821 00:10:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.821 00:10:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.094 00:10:46 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.094 00:10:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.094 00:10:46 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:16.094 00:10:46 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.094 00:10:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.094 00:10:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.094 00:10:46 -- common/autotest_common.sh@10 -- # set +x 00:03:16.094 ************************************ 00:03:16.094 START TEST setup.sh 00:03:16.094 ************************************ 00:03:16.094 00:10:46 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.354 * Looking for test storage... 00:03:16.354 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.354 00:10:46 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.354 --rc genhtml_branch_coverage=1 00:03:16.354 --rc genhtml_function_coverage=1 00:03:16.354 --rc genhtml_legend=1 00:03:16.354 --rc geninfo_all_blocks=1 00:03:16.354 --rc geninfo_unexecuted_blocks=1 00:03:16.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.354 ' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.354 --rc genhtml_branch_coverage=1 00:03:16.354 --rc genhtml_function_coverage=1 00:03:16.354 --rc genhtml_legend=1 00:03:16.354 --rc geninfo_all_blocks=1 00:03:16.354 --rc geninfo_unexecuted_blocks=1 00:03:16.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.354 ' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.354 --rc genhtml_branch_coverage=1 00:03:16.354 --rc genhtml_function_coverage=1 00:03:16.354 --rc genhtml_legend=1 00:03:16.354 --rc geninfo_all_blocks=1 00:03:16.354 --rc geninfo_unexecuted_blocks=1 00:03:16.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.354 ' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.354 --rc genhtml_branch_coverage=1 00:03:16.354 --rc genhtml_function_coverage=1 00:03:16.354 --rc genhtml_legend=1 00:03:16.354 --rc geninfo_all_blocks=1 00:03:16.354 --rc geninfo_unexecuted_blocks=1 00:03:16.354 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.354 ' 00:03:16.354 00:10:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:16.354 00:10:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.354 00:10:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.354 00:10:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.354 
00:10:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.354 ************************************ 00:03:16.354 START TEST acl 00:03:16.354 ************************************ 00:03:16.354 00:10:46 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:16.614 * Looking for test storage... 00:03:16.614 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.614 00:10:47 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.614 --rc genhtml_branch_coverage=1 00:03:16.614 --rc genhtml_function_coverage=1 00:03:16.614 --rc genhtml_legend=1 00:03:16.614 --rc geninfo_all_blocks=1 00:03:16.614 --rc geninfo_unexecuted_blocks=1 00:03:16.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.614 ' 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.614 --rc genhtml_branch_coverage=1 00:03:16.614 --rc genhtml_function_coverage=1 00:03:16.614 --rc genhtml_legend=1 00:03:16.614 --rc geninfo_all_blocks=1 00:03:16.614 --rc geninfo_unexecuted_blocks=1 00:03:16.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.614 ' 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.614 --rc genhtml_branch_coverage=1 00:03:16.614 --rc genhtml_function_coverage=1 00:03:16.614 --rc genhtml_legend=1 00:03:16.614 --rc geninfo_all_blocks=1 00:03:16.614 --rc geninfo_unexecuted_blocks=1 00:03:16.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.614 ' 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.614 --rc genhtml_branch_coverage=1 00:03:16.614 --rc genhtml_function_coverage=1 00:03:16.614 --rc genhtml_legend=1 00:03:16.614 --rc geninfo_all_blocks=1 00:03:16.614 --rc geninfo_unexecuted_blocks=1 00:03:16.614 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.614 ' 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:16.614 00:10:47 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.614 00:10:47 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.614 00:10:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.614 00:10:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.614 00:10:47 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.197 00:10:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:23.197 00:10:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:23.197 00:10:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.197 00:10:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:23.197 00:10:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.197 00:10:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:25.734 Hugepages 00:03:25.734 node hugesize free / total 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:03:25.734 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.734 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.994 00:10:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.994 00:10:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.994 00:10:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.994 00:10:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.994 ************************************ 00:03:25.994 START TEST denied 00:03:25.994 ************************************ 00:03:25.994 00:10:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:25.994 00:10:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:25.994 00:10:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:25.994 00:10:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:25.994 00:10:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.994 00:10:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:32.583 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.583 00:11:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.150 
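The collect_setup_devs step above drives the read -r _ dev _ _ _ driver _ loop visible in the trace: it walks the setup.sh status table, skips the hugepage summary rows and the ioatdma channels, and keeps only nvme-bound BDFs. A hedged reconstruction of that filter (column positions follow the Status header printed earlier; SETUP_SH is a placeholder path):

# Keep PCI functions bound to nvme; skip header/hugepage rows and the
# (8086 2021) ioatdma channels that dominate the status output.
SETUP_SH=/path/to/spdk/scripts/setup.sh   # placeholder
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue    # not a BDF -> header or hugepage row
    [[ $driver == nvme ]] || continue    # e.g. ioatdma rows fall through here
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <("$SETUP_SH" status)
(( ${#devs[@]} > 0 )) && printf 'nvme controller: %s\n' "${devs[@]}"

The zoned check a few entries earlier feeds the same flow: is_block_zoned reads /sys/block/<dev>/queue/zoned, and a device reporting anything but none would be excluded before this loop runs.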
00:03:39.150 real 0m12.211s
00:03:39.150 user 0m3.534s
00:03:39.150 sys 0m7.789s
00:11:08 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:39.150 00:11:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:39.150 ************************************
00:03:39.150 END TEST denied
00:03:39.150 ************************************
00:03:39.150 00:11:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:39.150 00:11:08 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:39.150 00:11:08 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:39.150 00:11:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:39.150 ************************************
00:03:39.150 START TEST allowed
00:03:39.150 ************************************
00:03:39.150 00:11:08 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:39.150 00:11:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0
00:03:39.150 00:11:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:39.150 00:11:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*'
00:03:39.150 00:11:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.150 00:11:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:47.269 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
00:03:47.269 00:11:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:47.269 00:11:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:47.269 00:11:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:47.269 00:11:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:47.269 00:11:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:53.975
00:03:53.975 real 0m14.758s
00:03:53.975 user 0m3.643s
00:03:53.975 sys 0m7.678s
00:03:53.975 00:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:53.975 00:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:53.975 ************************************
00:03:53.975 END TEST allowed
00:03:53.975 ************************************
00:03:53.975
00:03:53.975 real 0m36.719s
00:03:53.975 user 0m10.646s
00:03:53.975 sys 0m21.943s
00:03:53.975 00:11:23 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:53.975 00:11:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:53.975 ************************************
00:03:53.975 END TEST acl
00:03:53.975 ************************************
00:03:53.975 00:11:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:03:53.975 00:11:23 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:53.975 00:11:23 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:53.975 00:11:23 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:53.975 ************************************
00:03:53.975 START TEST hugepages
00:03:53.975 ************************************
00:03:53.975 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:03:53.975 * Looking for test storage...
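Both ACL paths converge on the verify step shown above: resolve the controller's driver symlink in sysfs and compare it with the expectation, nvme while the device is blocked and vfio-pci once the allowed rebind has run. A small sketch of that check (the BDF is the one from this run, purely as an example):

# Report the kernel driver currently bound to each PCI function by
# resolving /sys/bus/pci/devices/<bdf>/driver, as verify does above.
verify_driver() {
    local dev link
    for dev in "$@"; do
        [[ -e /sys/bus/pci/devices/$dev ]] || { echo "$dev: no such device"; return 1; }
        link=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
        echo "$dev -> ${link##*/}"    # basename of the link is the driver
    done
}
verify_driver 0000:1a:00.0    # e.g. "0000:1a:00.0 -> vfio-pci" after setup.sh config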
00:03:53.975 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:53.975 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:53.975 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:03:53.975 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:53.975 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:53.975 00:11:23 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.975 00:11:23 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.976 00:11:23 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:53.976 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.976 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.976 --rc genhtml_branch_coverage=1 00:03:53.976 --rc genhtml_function_coverage=1 00:03:53.976 --rc genhtml_legend=1 00:03:53.976 --rc geninfo_all_blocks=1 00:03:53.976 --rc geninfo_unexecuted_blocks=1 00:03:53.976 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.976 ' 00:03:53.976 00:11:23 
setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.976 --rc genhtml_branch_coverage=1 00:03:53.976 --rc genhtml_function_coverage=1 00:03:53.976 --rc genhtml_legend=1 00:03:53.976 --rc geninfo_all_blocks=1 00:03:53.976 --rc geninfo_unexecuted_blocks=1 00:03:53.976 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.976 ' 00:03:53.976 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.976 --rc genhtml_branch_coverage=1 00:03:53.976 --rc genhtml_function_coverage=1 00:03:53.976 --rc genhtml_legend=1 00:03:53.976 --rc geninfo_all_blocks=1 00:03:53.976 --rc geninfo_unexecuted_blocks=1 00:03:53.976 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.976 ' 00:03:53.976 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.976 --rc genhtml_branch_coverage=1 00:03:53.976 --rc genhtml_function_coverage=1 00:03:53.976 --rc genhtml_legend=1 00:03:53.976 --rc geninfo_all_blocks=1 00:03:53.976 --rc geninfo_unexecuted_blocks=1 00:03:53.976 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.976 ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 71980488 kB' 'MemAvailable: 76180688 kB' 'Buffers: 9772 kB' 'Cached: 12523860 kB' 'SwapCached: 0 kB' 'Active: 8938716 kB' 'Inactive: 4107900 kB' 'Active(anon): 8516424 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516368 kB' 'Mapped: 198996 kB' 
'Shmem: 8003440 kB' 'KReclaimable: 505928 kB' 'Slab: 1112160 kB' 'SReclaimable: 505928 kB' 'SUnreclaim: 606232 kB' 'KernelStack: 17456 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434172 kB' 'Committed_AS: 9848236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.976 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _
00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:53.977 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read / compare / continue cycle repeats for each remaining /proc/meminfo key (Writeback through HugePages_Surp) until the Hugepagesize line matches]
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
[xtrace condensed: clear_hp loops over both nodes and every hugepages-* size directory, echoing 0 into each nr_hugepages]
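The condensed trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until the requested key (here Hugepagesize) matches and its value (2048 kB) is echoed, after which clear_hp zeroes every per-node hugepage pool so the test starts clean. A minimal self-contained sketch of both steps; the simplified bodies are an assumption, not the script's exact code, and writing nr_hugepages requires root:

    #!/usr/bin/env bash
    # Look up one field of /proc/meminfo the way the traced loop does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "Hugepagesize:    2048 kB" -> var=Hugepagesize val=2048
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 in this run

    # clear_hp equivalent: zero every hugepage pool on every NUMA node.
    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"                 # needs root
    done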
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:53.978 00:11:23 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:53.978 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:53.978 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:53.978 00:11:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.978 ************************************
00:03:53.978 START TEST single_node_setup
00:03:53.978 ************************************
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
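get_test_nr_hugepages above turns the requested size 2097152 into nr_hugepages=1024 by dividing by the 2048 kB default page size, and assigns the whole count to the single requested node. A sketch of that arithmetic, using the trace's variable names; treating size as kB (2 GiB total) is an inference from the numbers, not something the excerpt states:

    size=2097152                                  # kB requested, per the trace
    default_hugepages=2048                        # kB per 2 MB hugepage
    nr_hugepages=$(( size / default_hugepages ))  # -> 1024 pages
    user_nodes=('0')                              # the one HUGENODE target
    declare -a nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages            # node 0 gets all 1024 pages
    done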
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.978 00:11:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:57.273 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:57.273 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:00.573 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
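The rebind lines above are scripts/setup.sh detaching the ioatdma DMA channels and the NVMe drive from their kernel drivers and handing them to vfio-pci for userspace I/O. The generic kernel-side mechanism for such a rebind is the sysfs driver_override sequence sketched below; this is a sketch of the kernel interface only, not a claim about setup.sh's internals, and the BDF is just one device taken from the log:

    bdf=0000:00:04.7                                           # example device from the log
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"  # leave ioatdma
    echo "$bdf"   > /sys/bus/pci/drivers_probe                 # reprobe -> vfio-pci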
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:04:02.476 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:04:02.477 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74158432 kB' 'MemAvailable: 78358544 kB' 'Buffers: 9772 kB' 'Cached: 12524044 kB' 'SwapCached: 0 kB' 'Active: 8940356 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518064 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518380 kB' 'Mapped: 198996 kB' 'Shmem: 8003624 kB' 'KReclaimable: 505840 kB' 'Slab: 1110840 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 605000 kB' 'KernelStack: 17392 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9850220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213096 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed: each snapshot key from MemTotal through HardwareCorrupted is read and compared against AnonHugePages, continue on mismatch]
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
[xtrace condensed: the get_meminfo prologue repeats with get=HugePages_Surp -- mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:02.478 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74161992 kB' 'MemAvailable: 78362104 kB' 'Buffers: 9772 kB' 'Cached: 12524052 kB' 'SwapCached: 0 kB' 'Active: 8940756 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518464 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518808 kB' 'Mapped: 198880 kB' 'Shmem: 8003632 kB' 'KReclaimable: 505840 kB' 'Slab: 1110784 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 604944 kB' 'KernelStack: 17424 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9860404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
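The snapshots above are the raw material for the three lookups verify_nr_hugepages performs: AnonHugePages, HugePages_Surp and HugePages_Rsvd are each fished out by rescanning the whole file with get_meminfo. Equivalent one-shot lookups, as an editor's sketch rather than what common.sh actually does:

    anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv"   # all 0 in these snapshots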
[xtrace condensed: each snapshot key is again read and compared, continue on mismatch, until HugePages_Surp matches]
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: the get_meminfo prologue repeats with get=HugePages_Rsvd -- mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74161992 kB' 'MemAvailable: 78362104 kB' 'Buffers: 9772 kB' 'Cached: 12524052 kB' 'SwapCached: 0 kB' 'Active: 8941004 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518712 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519028 kB' 'Mapped: 198880 kB' 'Shmem: 8003632 kB' 'KReclaimable: 505840 kB' 'Slab: 1110784 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 604944 kB' 'KernelStack: 17424 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9850632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213064 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.740 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:02.741 nr_hugepages=1024 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:02.741 resv_hugepages=0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:02.741 surplus_hugepages=0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:02.741 anon_hugepages=0 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74162516 kB' 'MemAvailable: 78362628 kB' 'Buffers: 9772 kB' 'Cached: 12524080 kB' 'SwapCached: 0 kB' 'Active: 8941956 kB' 'Inactive: 
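The two lookups above (HugePages_Surp -> surp=0, HugePages_Rsvd -> resv=0) both go through the same get_meminfo helper: the whole meminfo file is slurped into an array with mapfile, any "Node <n> " prefix that per-node sysfs files carry is stripped, and the lines are scanned with IFS=': ' so that field name, value, and unit split apart; the backslash-riddled patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are simply how bash xtrace prints a quoted (literal) right-hand side of [[ ... ]]. A minimal sketch of that helper, reconstructed from the trace rather than taken from the actual setup/common.sh source:

shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # With no node argument this probes /sys/devices/system/node/node/meminfo,
    # which never exists, so the global /proc/meminfo is kept (visible above).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed "Node 0 " etc.; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # IFS=': ' splits on the colon and on spaces, so for "HugePages_Rsvd:  0"
    # var=HugePages_Rsvd and val=0, with any trailing unit (kB) landing in _.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Rsvd it reads /proc/meminfo; called as get_meminfo HugePages_Surp 0 (as happens further down) it switches to the node0 sysfs file.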
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74162516 kB' 'MemAvailable: 78362628 kB' 'Buffers: 9772 kB' 'Cached: 12524080 kB' 'SwapCached: 0 kB' 'Active: 8941956 kB' 'Inactive: 4107900 kB' 'Active(anon): 8519664 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519924 kB' 'Mapped: 199384 kB' 'Shmem: 8003660 kB' 'KReclaimable: 505840 kB' 'Slab: 1110784 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 604944 kB' 'KernelStack: 17408 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9852144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213048 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:04:02.741 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: `[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / continue / IFS=': ' / read -r var val _ repeats for every /proc/meminfo field preceding HugePages_Total ...]
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
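At this point the test holds total=1024, surp=0, resv=0 and, via get_nodes, a two-node NUMA map with all 1024 hugepages on node0 (nodes_sys[0]=1024, nodes_sys[1]=0). The checks at hugepages.sh@106-@109 assert that the kernel's HugePages_Total is fully accounted for by the requested, surplus, and reserved pages. A plausible reconstruction of that accounting, reusing the get_meminfo sketch above (nr_hugepages=1024 was requested earlier in the run; the exact hugepages.sh source may differ):

nr_hugepages=1024                        # requested earlier in this run
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
total=$(get_meminfo HugePages_Total)     # 1024 in this run
# Every allocated hugepage must be accounted for:
(( total == nr_hugepages + surp + resv )) || exit 1

With the global totals reconciled, the remaining work is per-node: the loop traced next re-reads HugePages_Surp from each node's sysfs meminfo.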
00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34018236 kB' 'MemUsed: 14046628 kB' 'SwapCached: 0 kB' 'Active: 6386824 kB' 'Inactive: 3878016 kB' 'Active(anon): 6176592 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10046352 kB' 'Mapped: 99780 kB' 'AnonPages: 222256 kB' 'Shmem: 5958104 kB' 'KernelStack: 10184 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219612 kB' 'Slab: 515680 kB' 'SReclaimable: 219612 kB' 'SUnreclaim: 296068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.743 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # ... (per-key scan trace elided: each remaining /proc/meminfo key fails the match against HugePages_Surp and hits 'continue') ...
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:02.744 node0=1024 expecting 1024
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:02.744
00:04:02.744 real    0m9.229s
00:04:02.744 user    0m2.144s
00:04:02.744 sys     0m4.003s
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
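
The sorted_t/sorted_s assignments at setup/hugepages.sh@126 above rely on a compact bash idiom: indexing an array by the per-node count itself, so equal counts collapse onto a single element. A minimal standalone sketch of that trick (variable names mirror the trace; the two-node values are illustrative, not taken from this run):

    #!/usr/bin/env bash
    # Deduplicate per-node hugepage counts by using the count as the array
    # index; if every node got the same count, the array ends up with one key.
    declare -a nodes_test=([0]=512 [1]=512)    # illustrative per-node counts
    declare -a sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1           # same move as hugepages.sh@126
    done
    echo "distinct per-node counts: ${!sorted_t[*]}"        # -> 512
    (( ${#sorted_t[@]} == 1 )) && echo "all nodes hold the same count"

The final "[[ 1024 == \1\0\2\4 ]]" check in the trace is the verification payoff: the observed total matches the expected total, so single_node_setup passes.
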
00:04:02.744 00:11:33 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:04:02.744 ************************************
00:04:02.744 END TEST single_node_setup
00:04:02.744 ************************************
00:04:02.744 00:11:33 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:04:02.744 00:11:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:02.744 00:11:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:02.744 00:11:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.744 ************************************
00:04:02.744 START TEST even_2G_alloc
00:04:02.744 ************************************
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
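
For reference, the arithmetic this setup trace just walked through: a 2097152 kB request at the 2048 kB hugepage size is 1024 pages (consistent with 'Hugetlb: 2097152 kB' in the meminfo snapshots below), and with _no_nodes=2 and no user-supplied node list each node is assigned 512, walking the nodes last-to-first. A sketch of that even-split logic; the division by the default hugepage size is an assumed derivation, since the trace only shows the resulting nr_hugepages=1024:

    #!/usr/bin/env bash
    # Even split of a hugepage request across NUMA nodes, mirroring the
    # get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above.
    size_kb=2097152                    # requested allocation, in kB (2G)
    default_hugepage_kb=2048           # Hugepagesize from /proc/meminfo
    no_nodes=2                         # _no_nodes in the trace
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 1024 (assumed)
    per_node=$(( nr_hugepages / no_nodes ))             # -> 512
    declare -a nodes_test=()
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$per_node   # fills node 1, then node 0, like @80-@81
        (( no_nodes-- ))
    done
    echo "per-node plan: ${nodes_test[*]}"   # -> 512 512
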
00:04:02.744 00:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:06.057 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:06.057 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:06.057 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:06.316 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:06.316 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74180072 kB' 'MemAvailable: 78380184 kB' 'Buffers: 9772 kB' 'Cached: 12524240 kB' 'SwapCached: 0 kB' 'Active: 8941232 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518940 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518460 kB' 'Mapped: 198236 kB' 'Shmem: 8003820 kB' 'KReclaimable: 505840 kB' 'Slab: 1111488 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 605648 kB' 'KernelStack: 17376 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9840604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213144 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.857 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # ... (per-key scan trace elided: every snapshot key from MemTotal through HardwareCorrupted fails the match against AnonHugePages and hits 'continue') ...
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
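
anon=0 completes one full get_meminfo round trip. Stripped of the xtrace noise, the loop the log has been single-stepping is simply a key lookup over meminfo lines; a minimal re-sketch of that behavior (not the SPDK source, which additionally handles per-node meminfo files):

    #!/usr/bin/env bash
    # Look up one key in /proc/meminfo the way the traced loop does: split each
    # line on ':' and spaces, skip non-matching keys, print the value on a match.
    get_meminfo_sketch() {             # usage: get_meminfo_sketch AnonHugePages
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1                       # key not present
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on the machine traced above

Each non-matching key costs one 'continue' plus the re-set of IFS, which is exactly why a single lookup produces the long per-key trace seen here when xtrace is enabled.
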
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74180600 kB' 'MemAvailable: 78380712 kB' 'Buffers: 9772 kB' 'Cached: 12524244 kB' 'SwapCached: 0 kB' 'Active: 8941416 kB' 'Inactive: 4107900 kB' 'Active(anon): 8519124 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518264 kB' 'Mapped: 198112 kB' 'Shmem: 8003824 kB' 'KReclaimable: 505840 kB' 'Slab: 1111424 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 605584 kB' 'KernelStack: 17360 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9840620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213128 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.859 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # ... (per-key scan trace elided: every snapshot key from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and hits 'continue') ...
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.860 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74181668 kB' 'MemAvailable: 78381780 kB' 'Buffers: 9772 kB' 'Cached: 12524264 kB' 'SwapCached: 0 kB' 'Active: 8942348 kB' 'Inactive: 4107900 kB' 'Active(anon): 8520056 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519580 kB' 'Mapped: 198616 kB' 'Shmem: 8003844 kB' 'KReclaimable: 505840 kB' 'Slab: 1111424 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 605584 kB' 'KernelStack: 17360 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9844352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213128 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.861 00:11:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.861 00:11:38 
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:08.862 nr_hugepages=1024
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:08.862 resv_hugepages=0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:08.862 surplus_hugepages=0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:08.862 anon_hugepages=0
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
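[editor's note: the two arithmetic guards just above are the check this test exists for: the hugepage pool read back from /proc/meminfo must equal what was requested. A quick consistency check with the values from this run's log:]

    # Values echoed by the trace above (this run):
    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "pool accounted for"   # 1024 == 1024 + 0 + 0
    # The dump is self-consistent too: Hugetlb = HugePages_Total * Hugepagesize,
    # i.e. 1024 pages * 2048 kB = 2097152 kB, matching 'Hugetlb: 2097152 kB'.
    echo "$(( 1024 * 2048 )) kB"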
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.862 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.863 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74173448 kB' 'MemAvailable: 78373560 kB' 'Buffers: 9772 kB' 'Cached: 12524268 kB' 'SwapCached: 0 kB' 'Active: 8947456 kB' 'Inactive: 4107900 kB' 'Active(anon): 8525164 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524684 kB' 'Mapped: 198616 kB' 'Shmem: 8003848 kB' 'KReclaimable: 505840 kB' 'Slab: 1111424 kB' 'SReclaimable: 505840 kB' 'SUnreclaim: 605584 kB' 'KernelStack: 17344 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9849400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213132 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed: the read loop again walks every key, MemTotal through Unaccepted, this time against HugePages_Total, continuing past each non-match]
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.864 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35069288 kB' 'MemUsed: 12995576 kB' 'SwapCached: 0 kB' 'Active: 6388440 kB' 'Inactive: 3878016 kB' 'Active(anon): 6178208 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10046500 kB' 'Mapped: 99516 kB' 'AnonPages: 223216 kB' 'Shmem: 5958252 kB' 'KernelStack: 10136 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219612 kB' 'Slab: 515476 kB' 'SReclaimable: 219612 kB' 'SUnreclaim: 295864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
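[editor's note: get_nodes found two NUMA nodes, and "even_2G_alloc" expects the 1024-page pool split evenly, 512 per node. The per-node loop traced above and below amounts to the following sketch; variable names follow the trace, get_meminfo is the helper sketched earlier, and the bookkeeping that relates nodes_sys to nodes_test is not visible in this excerpt:]

    # Per-node accounting, paraphrasing hugepages.sh@114-@116 in this trace:
    declare -a nodes_test=([0]=0 [1]=0)   # stand-in; populated elsewhere in hugepages.sh
    resv=0                                # system-wide reserved pages, read above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Add each node's own surplus, read from /sys/devices/system/node/node$node/meminfo:
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done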
[xtrace condensed: the read loop walks the node0 keys, MemTotal through FilePmdMapped, against HugePages_Surp, continuing past every non-match]
00:04:08.865 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.865 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.865 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.865 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
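The lookup traced above is setup/common.sh's get_meminfo helper. Reconstructed from the traced statements, a condensed sketch of it looks like this (the read-loop plumbing is simplified, so treat it as illustrative rather than the script's exact source):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo from sysfs instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that off.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Echo the requested field's value and stop at the first match.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # -> 0 for node 1, as in the trace that follows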
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.866 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 39103656 kB' 'MemUsed: 5116924 kB' 'SwapCached: 0 kB' 'Active: 2554312 kB' 'Inactive: 229884 kB' 'Active(anon): 2342252 kB' 'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2487540 kB' 'Mapped: 98748 kB' 'AnonPages: 296704 kB' 'Shmem: 2045596 kB' 'KernelStack: 7496 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 286228 kB' 'Slab: 595948 kB' 'SReclaimable: 286228 kB' 'SUnreclaim: 309720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace elided: the scan steps through each node1 field above (MemTotal through HugePages_Free); every non-matching field takes the '-- # continue' branch ...]
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:08.867 node0=512 expecting 512
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:04:08.867 node1=512 expecting 512
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:04:08.867
00:04:08.867 real 0m5.785s
00:04:08.867 user 0m2.017s
00:04:08.867 sys 0m3.787s
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:08.867 00:11:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:08.867 ************************************
00:04:08.867 END TEST even_2G_alloc
00:04:08.867 ************************************
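The per-node check that just passed (node0=512 expecting 512, node1=512 expecting 512) can be reproduced by hand from sysfs. A minimal sketch of the same comparison, leaving out the surplus/reserved adjustments hugepages.sh folds in via get_meminfo as traced above:

expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 1 HugePages_Total: 512".
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    (( total == expected )) || exit 1
done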
00:04:08.867 00:11:39 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:04:08.867 00:11:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.867 00:11:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.867 00:11:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:08.867 ************************************
00:04:08.867 START TEST odd_alloc
00:04:08.867 ************************************
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
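The @80-@83 iterations above are the per-node split of an odd page count: 1025 pages over two nodes comes out as 512 on node 1 and 513 on node 0, summing back to 1025. A standalone sketch of that arithmetic (variable names are illustrative, not hugepages.sh's own):

nr_hugepages=1025
no_nodes=2
declare -a nodes_test
remaining=$nr_hugepages
while (( no_nodes > 0 )); do
    # Integer division gives this node an even share and leaves the
    # remainder for the nodes still to be processed, so the last node
    # handled (node 0) absorbs the odd page: 1025 -> 512 + 513.
    share=$(( remaining / no_nodes ))
    nodes_test[no_nodes - 1]=$share
    remaining=$(( remaining - share ))
    no_nodes=$(( no_nodes - 1 ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512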
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.867 00:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:12.158 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:12.158 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.158 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
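The 'Already using the vfio-pci driver' lines above are scripts/setup.sh reporting devices it finds already bound to vfio-pci. A quick way to list everything currently bound to that driver in the same style (standard sysfs paths; the message format here is this sketch's own):

for dev in /sys/bus/pci/drivers/vfio-pci/0000:*; do
    [[ -e $dev ]] || continue          # glob may match nothing
    bdf=${dev##*/}                     # e.g. 0000:1a:00.0
    ven=$(< "$dev/vendor")             # e.g. 0x8086
    did=$(< "$dev/device")             # e.g. 0x0a54
    echo "$bdf (${ven#0x} ${did#0x}): Already using the vfio-pci driver"
done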
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.699 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74219124 kB' 'MemAvailable: 78419188 kB' 'Buffers: 9772 kB' 'Cached: 12524444 kB' 'SwapCached: 0 kB' 'Active: 8942060 kB' 'Inactive: 4107900 kB' 'Active(anon): 8519768 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518480 kB' 'Mapped: 198320 kB' 'Shmem: 8004024 kB' 'KReclaimable: 505792 kB' 'Slab: 1111552 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605760 kB' 'KernelStack: 17376 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9841328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213160 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... setup/common.sh@31-32 xtrace elided: the scan steps through each field above (MemTotal through HardwareCorrupted) without matching AnonHugePages; every non-matching field takes the '-- # continue' branch ...]
00:04:14.700 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.700 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.700 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.700 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
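verify_nr_hugepages has now read AnonHugePages (0) and immediately repeats the same walk for HugePages_Surp below. The traced field-by-field scan runs entirely in bash, presumably to avoid forking a process per lookup; an equivalent one-shot lookup with awk, shown only for comparison and not what the script does, would be:

anon=$(awk -F': +' '$1 == "AnonHugePages" {print $2 + 0}' /proc/meminfo)
surp=$(awk -F': +' '$1 == "HugePages_Surp" {print $2 + 0}' /proc/meminfo)
echo "anon=$anon surp=$surp"   # anon=0 surp=0 on this box, per the dumps above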
+([0-9]) }") 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74220416 kB' 'MemAvailable: 78420480 kB' 'Buffers: 9772 kB' 'Cached: 12524448 kB' 'SwapCached: 0 kB' 'Active: 8941092 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518800 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518044 kB' 'Mapped: 198192 kB' 'Shmem: 8004028 kB' 'KReclaimable: 505792 kB' 'Slab: 1111536 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605744 kB' 'KernelStack: 17376 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9841344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213128 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.701 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.701 00:11:44 
[... setup/common.sh@31-32 xtrace elided: the scan steps through each field above (MemTotal through HugePages_Free) against HugePages_Surp; every non-matching field takes the '-- # continue' branch ...]
-- # IFS=': ' 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74220592 kB' 'MemAvailable: 78420656 kB' 'Buffers: 9772 kB' 'Cached: 12524464 kB' 'SwapCached: 0 kB' 'Active: 8940556 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518264 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517476 kB' 'Mapped: 198192 kB' 'Shmem: 8004044 kB' 'KReclaimable: 505792 kB' 'Slab: 1111536 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605744 kB' 'KernelStack: 17344 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9841792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213112 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:14.702 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
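The xtrace above is setup/common.sh's get_meminfo walking a meminfo dump field by field until the requested key matches; the backslash-escaped right-hand sides (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) are just how xtrace prints a quoted pattern, i.e. a literal rather than glob comparison. A self-contained sketch of the same technique follows — not the SPDK source verbatim; the function and variable names are copied from the trace, the loop shape is an assumption:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern on per-node files

  # Sketch of the traced scan: read a meminfo file, split each line on
  # ': ', and print the value of the one requested field.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo
      # Per-node lookups switch to the sysfs copy when it exists (trace: @23/@24).
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      # sysfs lines carry a "Node N " prefix; strip it as the trace does (@29).
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # quoted RHS = literal match
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp      # -> 0 on the box traced above
  get_meminfo HugePages_Total 0   # -> node 0's count, read from sysfs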
[... the check/continue scan now repeats against HugePages_Rsvd for every field of the dump above, MemTotal down through HugePages_Free; all continue ...]
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:04:14.704 nr_hugepages=1025
00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:14.704 resv_hugepages=0
00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:14.704 surplus_hugepages=0
00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:14.704 anon_hugepages=0
00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
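In plain terms, the two arithmetic checks above assert that the hugepage bookkeeping is self-consistent: the requested nr_hugepages (1025) must account for the surplus (surp=0) and reserved (resv=0) pages just read back, and the get_meminfo HugePages_Total call that follows fetches the kernel's own total to close the loop. A minimal sketch of that bookkeeping, assuming the get_meminfo sketch above (values hard-coded from this run; the error message is mine, not the test's):

  nr_hugepages=1025
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1025 in this run
  # The test only proceeds when the kernel's totals add up.
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2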
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.704 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74217092 kB' 'MemAvailable: 78417156 kB' 'Buffers: 9772 kB' 'Cached: 12524504 kB' 'SwapCached: 0 kB' 'Active: 8944312 kB' 'Inactive: 4107900 kB' 'Active(anon): 8522020 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521180 kB' 'Mapped: 198696 kB' 'Shmem: 8004084 kB' 'KReclaimable: 505792 kB' 'Slab: 1111536 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605744 kB' 'KernelStack: 17360 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9845908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213112 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... the check/continue scan repeats against HugePages_Total for every field of the dump above, MemTotal down through Unaccepted; none matches ...]
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35081024 kB' 'MemUsed: 12983840 kB' 'SwapCached: 0 kB' 'Active: 6387424 kB' 'Inactive: 3878016 kB' 'Active(anon): 6177192 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10046708 kB' 'Mapped: 99536 kB' 'AnonPages: 221952 kB' 'Shmem: 5958460 kB' 'KernelStack: 10168 kB' 'PageTables: 4708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219564 kB' 'Slab: 515216 kB' 'SReclaimable: 219564 kB' 'SUnreclaim: 295652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
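The get_nodes block above is where the "odd_alloc" in the test name becomes visible: the 1025 requested pages are deliberately split unevenly across the two NUMA nodes as 513 + 512, and each node is then inspected through its own sysfs meminfo (/sys/devices/system/node/node0/meminfo, dumped above). The trace assigns 513 and 512 directly; the arithmetic below is just one illustrative way to reproduce that split (the paths and extglob pattern follow the trace, the formula is mine):

  shopt -s extglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=0        # discover node ids, as in get_nodes
  done
  no_nodes=${#nodes_sys[@]}              # 2 on this box
  for n in "${!nodes_sys[@]}"; do
      # even split of 1025, with the leftover page going to the low node(s)
      nodes_sys[n]=$(( 1025 / no_nodes + (n < 1025 % no_nodes ? 1 : 0) ))
  done
  printf 'node%s=%s\n' 0 "${nodes_sys[0]}" 1 "${nodes_sys[1]}"   # 513 and 512

With get_meminfo from the earlier sketch, get_meminfo HugePages_Surp 0 would then read node 0's sysfs meminfo, which is exactly what the trace resumes doing below.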
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.706 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 39131904 kB' 'MemUsed: 5088676 kB' 'SwapCached: 0 kB' 'Active: 2553716 kB' 'Inactive: 229884 kB' 'Active(anon): 2341656 kB' 'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2487572 kB' 'Mapped: 98808 kB' 'AnonPages: 296092 kB' 'Shmem: 2045628 kB' 'KernelStack: 7208 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 286228 kB' 'Slab: 596320 kB' 'SReclaimable: 286228 kB' 'SUnreclaim: 310092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.707 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
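With the per-node surplus in hand, the loop traced above folds reserved and surplus pages into each node's expected count before the comparison just below. A sketch of that accounting, reusing the helper sketched earlier; the seeded 513/512 values come from this run's "expecting" lines:

# Sketch of the per-node accounting loop traced above (values from this run).
declare -a nodes_test=([0]=513 [1]=512)   # expected pages per node, pre-seeded
resv=0                                    # no reserved pages in this run
for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 for both nodes here
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += surp ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # stays 513 / 512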
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513'
00:04:14.709 node0=513 expecting 513
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:04:14.709 node1=512 expecting 512
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:14.709
00:04:14.709 real 0m5.754s
00:04:14.709 user 0m1.968s
00:04:14.709 sys 0m3.805s
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:14.709 00:11:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:14.709 ************************************
00:04:14.709 END TEST odd_alloc
00:04:14.709 ************************************
00:04:14.709 00:11:44 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc
00:04:14.709 00:11:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:14.709 00:11:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:14.709 00:11:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:14.709 ************************************
00:04:14.709 START TEST custom_alloc
00:04:14.709 ************************************
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=,
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=()
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512
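The get_test_nr_hugepages trace above reduces a requested size in kB to a page count by dividing by the default hugepage size. The arithmetic, as a sketch; default_hugepages stands in for the 2048 kB Hugepagesize reported in the meminfo dumps later in this log:

# Sketch of the size-to-count arithmetic in get_test_nr_hugepages above.
default_hugepages=2048            # kB; the Hugepagesize reported in the dumps below
for size in 1048576 2097152; do   # the two kB totals requested by custom_alloc
    (( size >= default_hugepages )) && echo $(( size / default_hugepages ))
done
# prints 512 then 1024, matching the traced nr_hugepages values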
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:04:14.709 00:11:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024
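get_test_nr_hugepages_per_node, traced above, splits the count evenly across nodes, filling the highest-numbered node first; the ': 256' / ': 1' and ': 0' / ': 0' entries are the no-op evaluations of the remaining-pages and remaining-nodes updates. A sketch that reproduces those values; the exact decrement expressions are an inference from the trace:

# Sketch reproducing the even split traced above; the ':' no-op operands
# (256, 1, then 0, 0) pin down the update expressions inferred here.
_nr_hugepages=512
_no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    share=$(( _nr_hugepages / _no_nodes ))   # 256 per node for 512 over 2 nodes
    nodes_test[_no_nodes - 1]=$share         # fill highest-numbered node first
    : $(( _nr_hugepages -= share ))          # traced as ': 256' then ': 0'
    : $(( --_no_nodes ))                     # traced as ': 1' then ': 0'
done
echo "${nodes_test[@]}"   # -> 256 256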
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 ))
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.709 00:11:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:18.004 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:18.004 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:18.005 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
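The HUGENODE parameter handed to setup.sh above is just the nodes_hp entries joined on commas (custom_alloc sets IFS=, up front), with the per-node counts summed into the 1536 total verified below. A sketch of that construction:

# Sketch of the HUGENODE construction traced above.
declare -a nodes_hp=([0]=512 [1]=1024)   # per-node targets from this run
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total hugepages: $_nr_hugepages"      # 1536, as verified below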
00:04:18.005 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:18.005 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:19.924 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73160760 kB' 'MemAvailable: 77360824 kB' 'Buffers: 9772 kB' 'Cached: 12524644 kB' 'SwapCached: 0 kB' 'Active: 8944412 kB' 'Inactive: 4107900 kB' 'Active(anon): 8522120 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521224 kB' 'Mapped: 199336 kB' 'Shmem: 8004224 kB' 'KReclaimable: 505792 kB' 'Slab: 1111560 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605768 kB' 'KernelStack: 17440 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9876548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.189 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
...
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
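The verify_nr_hugepages gate traced above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) only counts AnonHugePages when transparent hugepages are not fully disabled. A sketch of that check, reusing the earlier helper; the sysfs path is the standard THP knob and is an assumption about where the traced string comes from:

# Sketch of the THP gate; /sys/kernel/mm/transparent_hugepage/enabled is the
# standard location of the "always [madvise] never" string seen in the trace.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run's dump
else
    anon=0
fi
echo "anon=$anon"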
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-25 -- # [trace condensed: get=HugePages_Surp, node unset, mem_f=/proc/meminfo]
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73160412 kB' 'MemAvailable: 77360476 kB' 'Buffers: 9772 kB' 'Cached: 12524644 kB' 'SwapCached: 0 kB' 'Active: 8944680 kB' 'Inactive: 4107900 kB' 'Active(anon): 8522388 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521472 kB' 'Mapped: 199236 kB' 'Shmem: 8004224 kB' 'KReclaimable: 505792 kB' 'Slab: 1111544 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605752 kB' 'KernelStack: 17440 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9876564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213160 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:20.191 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal..Unaccepted checked against HugePages_Surp, no match]
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: HugePages_Total, HugePages_Free and HugePages_Rsvd checked against HugePages_Surp, no match]
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-25 -- # [trace condensed: get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo]
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73159708 kB' 'MemAvailable: 77359772 kB' 'Buffers: 9772 kB' 'Cached: 12524664 kB' 'SwapCached: 0 kB' 'Active: 8945296 kB' 'Inactive: 4107900 kB' 'Active(anon): 8523004 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522000 kB' 'Mapped: 199740 kB' 'Shmem: 8004244 kB' 'KReclaimable: 505792 kB' 'Slab: 1111544 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605752 kB' 'KernelStack: 17456 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9877808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213192 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
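The mem=("${mem[@]#Node +([0-9]) }") expansion shown just above is what lets the same parser handle per-NUMA-node meminfo files, whose lines read "Node 0 MemTotal: ...". A small standalone illustration of the extglob strip; the sample values are made up for the example, not taken from this run:

    #!/usr/bin/env bash
    shopt -s extglob
    # Per-node meminfo lines carry a "Node N " prefix, e.g.:
    mem=('Node 0 MemTotal: 46142720 kB' 'Node 0 MemFree: 36580204 kB')
    # Strip "Node <digits> " from every element in one expansion:
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 46142720 kB
    # MemFree: 36580204 kB

With the prefix removed, plain /proc/meminfo and the per-node files parse identically.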
00:04:20.193 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal..HugePages_Free checked against HugePages_Rsvd, no match]
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:20.195 nr_hugepages=1536
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:20.195 resv_hugepages=0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:20.195 surplus_hugepages=0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:20.195 anon_hugepages=0
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-25 -- # [trace condensed: get=HugePages_Total, node unset, mem_f=/proc/meminfo]
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.195 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73159204 kB' 'MemAvailable: 77359268 kB' 'Buffers: 9772 kB' 'Cached: 12524664 kB' 'SwapCached: 0 kB' 'Active: 8947612 kB' 'Inactive: 4107900 kB' 'Active(anon): 8525320 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524316 kB' 'Mapped: 199740 kB' 'Shmem: 8004244 kB' 'KReclaimable: 505792 kB' 'Slab: 1111544 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605752 kB' 'KernelStack: 17440 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9879812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:20.196 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: keys MemTotal..ShmemPmdMapped checked against HugePages_Total, no match yet; the trace continues past this excerpt]
IFS=': ' 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.203 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- 
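The continue-chain condensed above is setup/common.sh's get_meminfo walking /proc/meminfo (or a node's sysfs meminfo file) one 'field: value' line at a time until the requested field matches. A minimal runnable sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK tree (the loop shape is an assumption; the names mirror the trace):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " strip pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f line
    local -a mem

    mem_f=/proc/meminfo
    # Per-node statistics live under sysfs; prefer them when a node is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it, as the trace
    # does with mem=("${mem[@]#Node +([0-9]) }").
    mem=("${mem[@]#Node +([0-9]) }")

    # The long run of continues in the log is this loop skipping fields.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total    # prints 1536 on this runner
get_meminfo HugePages_Surp 0   # per-node lookup; prints 0 here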
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:20.204 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35073752 kB' 'MemUsed: 12991112 kB' 'SwapCached: 0 kB' 'Active: 6394648 kB' 'Inactive: 3878016 kB' 'Active(anon): 6184416 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10046844 kB' 'Mapped: 99772 kB' 'AnonPages: 228988 kB' 'Shmem: 5958596 kB' 'KernelStack: 10200 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219564 kB' 'Slab: 515480 kB' 'SReclaimable: 219564 kB' 'SUnreclaim: 295916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:20.205 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the field scan repeats over node0's meminfo until HugePages_Surp matches below]
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
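The hugepages.sh@114-116 steps traced above fold the reserved and surplus counts into each node's expected total before the final comparison. A short sketch of that accounting under this run's numbers (512 and 1024 pages seeded per node, resv and surplus both 0; get_meminfo is the sketch shown earlier):

nodes_test=([0]=512 [1]=1024)   # per-node expectations seeded by custom_alloc
resv=0                          # reserved pages reported earlier in the run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes here
    (( nodes_test[node] += surp ))
done
echo "${nodes_test[@]}"   # 512 1024 when nothing is reserved or surplus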
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.211 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.212 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 38084816 kB' 'MemUsed: 6135764 kB' 'SwapCached: 0 kB' 'Active: 2556392 kB' 'Inactive: 229884 kB' 'Active(anon): 2344332 kB' 'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2487608 kB' 'Mapped: 99876 kB' 'AnonPages: 298848 kB' 'Shmem: 2045664 kB' 'KernelStack: 7304 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 286228 kB' 'Slab: 596064 kB' 'SReclaimable: 286228 kB' 'SUnreclaim: 309836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:20.212 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:20.212 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:20.212 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the field scan repeats over node1's meminfo until HugePages_Surp matches below]
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
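With both per-node lookups done, all that remains is what hugepages.sh@125-129 does next in the trace: echo each node's sysfs count against its expectation and compare the comma-joined lists. A sketch with this run's values:

nodes_test=(512 1024)   # expected pages per node
nodes_sys=(512 1024)    # pages reported by sysfs

for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# the trace's final gate: [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
actual=$(IFS=,; printf '%s' "${nodes_sys[*]}")
[[ $actual == 512,1024 ]] && echo 'custom_alloc layout verified'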
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:20.214 node0=512 expecting 512
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:04:20.214 node1=1024 expecting 1024
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:20.214
00:04:20.214 real 0m5.737s
00:04:20.214 user 0m1.742s
00:04:20.214 sys 0m3.709s
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.214 00:11:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:20.214 ************************************
00:04:20.214 END TEST custom_alloc
00:04:20.214 ************************************
00:04:20.214 00:11:50 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:20.214 00:11:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.214 00:11:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.214 00:11:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:20.214 ************************************
00:04:20.214 START TEST no_shrink_alloc
00:04:20.214 ************************************
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:20.214 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.215 00:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:23.504 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:23.504 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:23.504 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.409 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74210040 kB' 'MemAvailable: 78410104 kB' 'Buffers: 9772 kB' 'Cached: 12524848 kB' 'SwapCached: 0 kB' 'Active: 8943436 kB' 'Inactive: 4107900 kB' 'Active(anon): 8521144 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520032 kB' 'Mapped: 199308 kB' 'Shmem: 8004428 kB' 'KReclaimable: 505792 kB' 'Slab: 1111000 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605208 kB' 'KernelStack: 17424 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9877084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
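The no_shrink_alloc prologue above exported NRHUGE=1024 and HUGENODE=0 before re-running scripts/setup.sh. At the kernel interface that request boils down to the standard per-node sysfs knob; a hedged sketch of the equivalent manual steps (standard Linux hugetlb paths, not the setup.sh source; run as root, 2048kB matches the Hugepagesize reported above):

NRHUGE=1024
HUGENODE=0

# reserve 1024 default-size (2 MiB) hugepages on node 0 only
echo "$NRHUGE" > "/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages"

grep -E 'HugePages_(Total|Free|Surp)' /proc/meminfo                     # system-wide pool
grep HugePages_Total "/sys/devices/system/node/node$HUGENODE/meminfo"   # per-node view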
00:04:25.410 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ / continue scan repeats for each /proc/meminfo field while looking for AnonHugePages; the trace resumes mid-check below]
00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.674 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74209580 kB' 'MemAvailable: 78409644 kB' 'Buffers: 9772 kB' 'Cached: 12524848 kB' 'SwapCached: 0 kB' 'Active: 8947208 kB' 'Inactive: 4107900 kB' 'Active(anon): 8524916 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523816 kB' 'Mapped: 199812 kB' 'Shmem: 8004428 kB' 'KReclaimable: 505792 kB' 'Slab: 1111032 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605240 kB' 'KernelStack: 17456 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9880836 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 213064 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.675 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.675 00:11:56 
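[editor's note] The unrolled trace above and below comes from get_meminfo() in setup/common.sh walking /proc/meminfo one field at a time until it hits the requested key. A minimal bash sketch of that logic, reconstructed from the xtrace (an approximation, not the script's verbatim source; the @17-@33 markers in the log are its line numbers):

    shopt -s extglob                     # needed for the +([0-9]) pattern seen at common.sh@29

    get_meminfo() {                      # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # The trace probes a per-node meminfo file first (@23, @25); with node
        # unset it falls back to the global /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # node files prefix lines with "Node N "; strip it
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # one 'continue' per non-matching field in the trace
            echo "$val"                       # e.g. 0 for HugePages_Surp here
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With that in hand, the repeated blocks in this log are just successive calls: get_meminfo AnonHugePages feeding anon=0, get_meminfo HugePages_Surp feeding surp=0, and so on.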
00:04:25.675-00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: scan continues over the remaining fields, Active(anon) through HugePages_Rsvd; every non-matching field takes the 'continue' branch]
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:25.676 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: local get=HugePages_Rsvd node= var val mem_f mem; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _]
00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74206648 kB' 'MemAvailable: 78406712 kB' 'Buffers: 9772 kB' 'Cached: 12524868 kB' 'SwapCached: 0 kB' 'Active: 8949820 kB' 'Inactive: 4107900 kB' 'Active(anon): 8527528 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526412 kB' 'Mapped: 200224 kB' 'Shmem: 8004448 kB' 'KReclaimable: 505792 kB' 'Slab: 1111032 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605240 kB' 'KernelStack: 17440 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9883244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213084 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: scan for HugePages_Rsvd starts at MemTotal; fields MemTotal through Inactive compared so far, each taking 'continue']
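[editor's note] A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p tokens throughout this log: they are a bash xtrace artifact, not corruption. When the right-hand side of [[ == ]] is quoted, xtrace prints it with every character backslash-escaped to show it is matched literally rather than as a glob. A two-line reproduction (hypothetical standalone snippet, any field name works):

    set -x
    get=HugePages_Surp var=MemTotal
    [[ $var == "$get" ]] || true   # traces as: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x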
00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.677 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- 
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:25.678 nr_hugepages=1024
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:25.678 resv_hugepages=0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:25.678 surplus_hugepages=0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:25.678 anon_hugepages=0
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.678 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.679 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74206648 kB' 'MemAvailable: 78406712 kB' 'Buffers: 9772 kB' 'Cached: 12524892 kB' 'SwapCached: 0 kB' 'Active: 8943600 kB' 'Inactive: 4107900 kB' 'Active(anon): 8521308 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520152 kB' 'Mapped: 199720 kB' 'Shmem: 8004472 kB' 'KReclaimable: 505792 kB' 'Slab: 1111032 kB' 'SReclaimable: 505792 kB' 'SUnreclaim: 605240 kB' 'KernelStack: 17440 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9877148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... 00:04:25.679 00:11:56 trace continues: get_meminfo walks every key of that snapshot from MemTotal onward and executes 'continue' for each key that is not HugePages_Total ...]
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
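Seen end to end, the scan just traced is a small piece of shell: get_meminfo in setup/common.sh splits each 'Key: value kB' line on ': ' and echoes the value of the first key that matches the request. A minimal standalone sketch of that technique follows; the function name my_get_meminfo and the simplified while-read loop are my own (the real script reads the file into an array via mapfile and also supports per-node files):

    #!/usr/bin/env bash
    # Hypothetical re-creation of the scan in the trace above: split each
    # /proc/meminfo line on ': ' and print the value of the requested key.
    my_get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the 'continue' lines in the log
            echo "$val"
            return 0
        done </proc/meminfo
        return 1   # key not present
    }

    my_get_meminfo HugePages_Total   # should print 1024 on the machine traced above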
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.680 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34021948 kB' 'MemUsed: 14042916 kB' 'SwapCached: 0 kB' 'Active: 6388320 kB' 'Inactive: 3878016 kB' 'Active(anon): 6178088 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10047020 kB' 'Mapped: 99628 kB' 'AnonPages: 222636 kB' 'Shmem: 5958772 kB' 'KernelStack: 10184 kB' 'PageTables: 4796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219564 kB' 'Slab: 515300 kB' 'SReclaimable: 219564 kB' 'SUnreclaim: 295736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
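The per-node variant above differs from the global lookup only in its source file: with node=0, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix, and the mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 strips that prefix so the same key scan works unchanged. A rough, self-contained illustration of that prefix handling (extglob must be enabled for the +([0-9]) pattern; the sample array values are made up to mirror the node0 snapshot):

    #!/usr/bin/env bash
    shopt -s extglob   # enables the +([0-9]) pattern used below
    # Per-node meminfo lines look like "Node 0 MemTotal: 48064864 kB".
    mem=("Node 0 MemTotal: 48064864 kB" "Node 0 MemFree: 34021948 kB")
    # Strip the "Node <digits> " prefix, as setup/common.sh@29 does, so the
    # remaining "Key: value" text parses the same way as /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # -> "MemTotal: 48064864 kB" etc.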
[... 00:04:25.680 00:11:56 trace continues: get_meminfo walks node0's meminfo keys from MemTotal through HugePages_Free and executes 'continue' for each key that is not HugePages_Surp ...]
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:25.682 node0=1024 expecting 1024
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.682 00:11:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:28.971 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:28.971 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.971 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.881 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74189236 kB' 'MemAvailable: 78389188 kB' 'Buffers: 9772 kB' 'Cached: 12525024 kB' 'SwapCached: 0 kB' 'Active: 8945624 kB' 'Inactive: 4107900 kB' 'Active(anon): 8523332 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522156 kB' 'Mapped: 198980 kB' 'Shmem: 8004604 kB' 'KReclaimable: 505680 kB' 'Slab: 1111540 kB' 'SReclaimable: 505680 kB' 'SUnreclaim: 605860 kB' 'KernelStack: 17568 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9848468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213192 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
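One step above is easy to miss: before counting anonymous hugepages, hugepages.sh@95 compares the contents of /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never', with the active mode bracketed) against the pattern *\[\n\e\v\e\r\]*, i.e. it only consults AnonHugePages when THP is not disabled outright. A small sketch of that check (the thp_enabled name is mine, not the script's):

    #!/usr/bin/env bash
    # Succeed unless transparent hugepages are fully disabled. The sysfs file
    # brackets the active mode, so "[never]" appears only when THP is off.
    thp_enabled() {
        local mode
        mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
        [[ $mode != *"[never]"* ]]
    }

    thp_enabled && echo "THP active: AnonHugePages may be nonzero"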
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.881 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.882 00:12:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.882 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... repeated setup/common.sh@31/@32 trace condensed: the loop reads each remaining /proc/meminfo key (Inactive(anon) through HardwareCorrupted) and hits "continue" on every one, since none matches AnonHugePages ...]
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
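The scan condensed above is the test's get_meminfo helper from setup/common.sh walking the memory snapshot key by key. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the source (the real helper loads the snapshot with mapfile and also supports per-NUMA-node files):

    # Reconstruction of the scan seen in the trace: split each "Key: value kB"
    # line on ': ' and print the value for the requested key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys hit "continue"
            echo "$val"                        # e.g. "echo 0" for AnonHugePages
            return 0
        done < /proc/meminfo
        return 1
    }

Called as anon=$(get_meminfo AnonHugePages), which is what produces the anon=0 assignment logged above.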
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.883 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74181668 kB' 'MemAvailable: 78381588 kB' 'Buffers: 9772 kB' 'Cached: 12525028 kB' 'SwapCached: 0 kB' 'Active: 8950368 kB' 'Inactive: 4107900 kB' 'Active(anon): 8528076 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526788 kB' 'Mapped: 198976 kB' 'Shmem: 8004608 kB' 'KReclaimable: 505648 kB' 'Slab: 1111492 kB' 'SReclaimable: 505648 kB' 'SUnreclaim: 605844 kB' 'KernelStack: 17552 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9854180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213196 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... repeated setup/common.sh@31/@32 trace condensed: every key from MemTotal through HugePages_Rsvd fails to match HugePages_Surp and hits "continue" ...]
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
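The @22-@25 lines above show how the helper picks its data source. Because node= is empty in this run, the probed per-node path degenerates to the nonexistent /sys/devices/system/node/node/meminfo, so the helper stays on the system-wide /proc/meminfo. A sketch of that selection, under the assumption that mapfile reads from the chosen file (the redirection itself is not visible in the xtrace output):

    # Pick per-NUMA-node stats when a node id is given, else fall back to
    # the system-wide /proc/meminfo (what happens in this run).
    shopt -s extglob                  # needed for the +([0-9]) pattern below
    node=                             # empty => system-wide stats
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix per-node files carry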
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.885 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74187464 kB' 'MemAvailable: 78387384 kB' 'Buffers: 9772 kB' 'Cached: 12525028 kB' 'SwapCached: 0 kB' 'Active: 8945768 kB' 'Inactive: 4107900 kB' 'Active(anon): 8523476 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522212 kB' 'Mapped: 199292 kB' 'Shmem: 8004608 kB' 'KReclaimable: 505648 kB' 'Slab: 1111484 kB' 'SReclaimable: 505648 kB' 'SUnreclaim: 605836 kB' 'KernelStack: 17584 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9849304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... repeated setup/common.sh@31/@32 trace condensed: every key from MemTotal through HugePages_Free fails to match HugePages_Rsvd and hits "continue" ...]
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:30.887 nr_hugepages=1024
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:30.887 resv_hugepages=0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:30.887 surplus_hugepages=0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:30.887 anon_hugepages=0
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
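The echoed values and the two arithmetic tests at setup/hugepages.sh@106 and @108 above are the accounting this no_shrink_alloc case relies on: the configured count (1024 pages in this run) must equal HugePages_Total, with no surplus or reserved pages outstanding. Restated as a sketch using the get_meminfo reconstruction from earlier (the name expected is introduced here for illustration):

    expected=1024                                  # pages configured for this run
    surp=$(get_meminfo HugePages_Surp)             # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)             # 0 in the trace above
    nr_hugepages=$(get_meminfo HugePages_Total)    # 1024 in the trace above
    (( expected == nr_hugepages + surp + resv ))   # no stray surplus/reserved pages
    (( expected == nr_hugepages ))                 # the kernel honored the request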
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74180792 kB' 'MemAvailable: 78380712 kB' 'Buffers: 9772 kB' 'Cached: 12525032 kB' 'SwapCached: 0 kB' 'Active: 8947128 kB' 'Inactive: 4107900 kB' 'Active(anon): 8524836 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523588 kB' 'Mapped: 198976 kB' 'Shmem: 8004612 kB' 'KReclaimable: 505648 kB' 'Slab: 1111484 kB' 'SReclaimable: 505648 kB' 'SUnreclaim: 605836 kB' 'KernelStack: 17552 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9851440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.887 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... repeated setup/common.sh@31/@32 trace condensed: keys MemTotal through AnonPages fail to match HugePages_Total and hit "continue"; the scan resumes below ...]
00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.889 00:12:01 
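
The long field-by-field scan above (and the per-node rerun that begins here) is common.sh's get_meminfo helper at work: slurp /proc/meminfo, or a node's own meminfo file when a NUMA node is given, strip the "Node <n> " prefix the per-node files carry, then read "key: value" pairs until the requested field turns up, discarding every other line with continue. A minimal standalone sketch of that approach (Bash 4+ with extglob; names are illustrative, not the exact SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob                        # for the +([0-9]) pattern below

    # get_mem FIELD [NODE] - print FIELD from /proc/meminfo, or from the
    # per-node meminfo file when a NUMA node number is supplied.
    get_mem() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines begin "Node 0 ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_mem HugePages_Total                 # 1024 in this run
    get_mem HugePages_Surp 0                # surplus pages on node0

Because every non-matching key is consumed one read at a time, a single lookup of HugePages_Total echoes dozens of continue steps into the xtrace output, which is all the repetition visible above.
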
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34015728 kB' 'MemUsed: 14049136 kB' 'SwapCached: 0 kB' 'Active: 6387652 kB' 'Inactive: 3878016 kB' 'Active(anon): 6177420 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10047148 kB' 'Mapped: 99620 kB' 'AnonPages: 221792 kB' 'Shmem: 5958900 kB' 'KernelStack: 10248 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219420 kB' 'Slab: 515160 kB' 'SReclaimable: 219420 kB' 'SUnreclaim: 295740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:30.891 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:30.891 node0=1024 expecting 1024 00:04:31.150 00:12:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.150 00:04:31.150 real 0m10.702s 00:04:31.150 user 0m3.488s 00:04:31.150 sys 0m7.057s 00:04:31.150 00:12:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.150 00:12:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.150 ************************************ 00:04:31.150 END TEST no_shrink_alloc 00:04:31.150 
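
The verification that just passed (hugepages.sh@109 through the "node0=1024 expecting 1024" line) is plain bookkeeping: system-wide HugePages_Total must equal the requested pool plus surplus plus reserved pages, and the per-node counters must account for the same 1024 pages. A sketch of the equivalent consistency check against the kernel's sysfs counters, reusing the get_mem helper sketched earlier (2048 kB pages assumed, as on this machine):

    nr_hugepages=1024
    total=$(get_mem HugePages_Total)
    surp=$(get_mem HugePages_Surp)
    resv=$(get_mem HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) || echo "pool size off: $total"

    # Per-node view: each NUMA node exports its own counter.
    sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        (( sum += $(<"$f") ))
    done
    (( sum == total )) || echo "per-node sum $sum != $total"

On this two-node box the whole pool lands on node0 (1024 pages, node1 holds 0), which is exactly what the expectation line reports.
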
************************************ 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:31.150 00:12:01 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:31.150 00:04:31.150 real 0m37.853s 00:04:31.150 user 0m11.640s 00:04:31.150 sys 0m22.764s 00:04:31.150 00:12:01 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.150 00:12:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.150 ************************************ 00:04:31.150 END TEST hugepages 00:04:31.150 ************************************ 00:04:31.150 00:12:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:31.150 00:12:01 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.150 00:12:01 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.150 00:12:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.150 ************************************ 00:04:31.150 START TEST driver 00:04:31.150 ************************************ 00:04:31.150 00:12:01 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:31.150 * Looking for test storage... 
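
clear_hp, traced at the start of this block, tears the pool back down before the next suite: write 0 to every hugepage size on every node, then export CLEAR_HUGE=yes so later stages know the slate is clean. Roughly, as a standalone sketch (requires root; extglob enables the +([0-9]) pattern the script itself uses):

    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"     # release this size class on this node
        done
    done
    export CLEAR_HUGE=yes

Two nodes times two supported sizes (2 MiB and 1 GiB) gives the four echo 0 writes visible in the trace.
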
00:04:31.150 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:31.150 00:12:01 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:31.150 00:12:01 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version 00:04:31.150 00:12:01 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:31.409 00:12:01 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.409 00:12:01 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:31.409 00:12:01 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.409 00:12:01 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.409 --rc geninfo_unexecuted_blocks=1 00:04:31.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:31.409 ' 00:04:31.409 00:12:01 setup.sh.driver -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:31.409 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.410 --rc geninfo_unexecuted_blocks=1 00:04:31.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:31.410 ' 00:04:31.410 00:12:01 setup.sh.driver -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:31.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.410 --rc genhtml_branch_coverage=1 00:04:31.410 --rc genhtml_function_coverage=1 00:04:31.410 --rc genhtml_legend=1 00:04:31.410 --rc geninfo_all_blocks=1 00:04:31.410 --rc geninfo_unexecuted_blocks=1 00:04:31.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:31.410 ' 00:04:31.410 00:12:01 setup.sh.driver -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:31.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.410 --rc genhtml_branch_coverage=1 00:04:31.410 --rc genhtml_function_coverage=1 00:04:31.410 --rc genhtml_legend=1 00:04:31.410 --rc geninfo_all_blocks=1 00:04:31.410 --rc geninfo_unexecuted_blocks=1 00:04:31.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:31.410 ' 00:04:31.410 00:12:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:31.410 00:12:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.410 00:12:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.082 00:12:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:38.082 00:12:08 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.082 00:12:08 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.082 00:12:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.082 ************************************ 00:04:38.082 START TEST guess_driver 00:04:38.082 ************************************ 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 )) 00:04:38.082 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:38.083
00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:38.083 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:38.083 Looking for driver=vfio-pci 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.083 00:12:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.372 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.373 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.373 00:12:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.674 00:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.674 00:12:15 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.674 00:12:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.205 00:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:47.205 00:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:47.205 00:12:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.205 00:12:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.773 00:04:53.773 real 0m15.395s 00:04:53.773 user 0m3.736s 00:04:53.773 sys 0m7.769s 00:04:53.773 00:12:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.773 00:12:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.773 ************************************ 00:04:53.773 END TEST guess_driver 00:04:53.773 ************************************ 00:04:53.773 00:04:53.773 real 0m22.417s 00:04:53.773 user 0m5.792s 00:04:53.773 sys 0m11.955s 00:04:53.773 00:12:24 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.773 00:12:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.773 ************************************ 00:04:53.773 END TEST driver 00:04:53.773 ************************************ 00:04:53.773 00:12:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.773 ************************************ 00:04:53.773 START TEST devices 00:04:53.773 ************************************ 00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:53.774 * Looking for test storage... 
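
The driver suite that just wrapped up settled on vfio-pci from two signals visible in its trace: /sys/kernel/iommu_groups is populated (190 groups on this host), and modprobe --show-depends resolves vfio_pci and its dependency chain to real .ko files. A condensed sketch of that decision, folding the pick_driver/vfio/is_driver steps into one illustrative function:

    guess_driver() {
        shopt -s nullglob                   # empty glob -> empty array, not a literal
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is only usable with a working IOMMU and a resolvable module;
        # modprobe --show-depends prints the insmod chain when the module exists.
        if (( ${#iommu_groups[@]} > 0 )) &&
            [[ $(modprobe --show-depends vfio_pci 2> /dev/null) == *.ko* ]]; then
            echo vfio-pci
        else
            echo 'No valid driver found'    # upstream tries a uio fallback before this
        fi
    }

That failure string is the same marker the trace guards against in its [[ vfio-pci == \N\o\ \v\a\l\i\d ... ]] check. The devices suite starting here also reruns the lcov version gate the driver suite opened with: scripts/common.sh splits each version on ".", "-" and ":" and compares component by component, which is why lt 1.15 2 expands into the decimal/echo dance in the trace. A sketch of that comparison, assuming purely numeric components:

    # lt A B - succeed when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # A is newer
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # A is older
        done
        return 1                            # equal is not "less than"
    }

    lt 1.15 2 && echo "old lcov: add the branch/function coverage flags"
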
00:04:53.773 00:12:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.773 00:12:24 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:53.773 ************************************
00:04:53.773 START TEST devices
00:04:53.773 ************************************
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:53.774 * Looking for test storage...
00:04:53.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-:
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-:
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<'
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@345 -- # : 1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@353 -- # local d=1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@355 -- # echo 1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@353 -- # local d=2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@355 -- # echo 2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:53.774 00:12:24 setup.sh.devices -- scripts/common.sh@368 -- # return 0
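The cmp_versions trace above splits both version strings on ".", "-" and ":" and compares them element by element, returning success as soon as the requested relation holds. A condensed sketch of that logic (simplified from the scripts/common.sh trace, with missing elements treated as 0):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v lt=0 gt=0
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
    done
    case "$op" in
      '<') (( lt == 1 )) ;;
      '>') (( gt == 1 )) ;;
    esac
  }
  lt 1.15 2 && echo "1.15 < 2"   # matches the trace: 1 < 2 decides it, return 0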
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.774 --rc genhtml_branch_coverage=1
00:04:53.774 --rc genhtml_function_coverage=1
00:04:53.774 --rc genhtml_legend=1
00:04:53.774 --rc geninfo_all_blocks=1
00:04:53.774 --rc geninfo_unexecuted_blocks=1
00:04:53.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:53.774 '
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.774 --rc genhtml_branch_coverage=1
00:04:53.774 --rc genhtml_function_coverage=1
00:04:53.774 --rc genhtml_legend=1
00:04:53.774 --rc geninfo_all_blocks=1
00:04:53.774 --rc geninfo_unexecuted_blocks=1
00:04:53.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:53.774 '
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.774 --rc genhtml_branch_coverage=1
00:04:53.774 --rc genhtml_function_coverage=1
00:04:53.774 --rc genhtml_legend=1
00:04:53.774 --rc geninfo_all_blocks=1
00:04:53.774 --rc geninfo_unexecuted_blocks=1
00:04:53.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:53.774 '
00:04:53.774 00:12:24 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.774 --rc genhtml_branch_coverage=1
00:04:53.774 --rc genhtml_function_coverage=1
00:04:53.774 --rc genhtml_legend=1
00:04:53.774 --rc geninfo_all_blocks=1
00:04:53.774 --rc geninfo_unexecuted_blocks=1
00:04:53.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:53.774 '
00:04:53.774 00:12:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:53.774 00:12:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:53.774 00:12:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:53.774 00:12:24 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]]
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:05:00.334 00:12:30 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:00.334 No valid GPT data, bailing
00:05:00.334 00:12:30 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- scripts/common.sh@394 -- # pt=
00:05:00.334 00:12:30 setup.sh.devices -- scripts/common.sh@395 -- # return 1
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:00.334 00:12:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:00.334 00:12:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size ))
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
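The screen traced above decides whether the NVMe disk may be claimed for the tests: spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", empty PTTYPE), and the disk is at least 3 GiB, so it joins the candidate list. A rough equivalent of that check, with the helper names taken from the trace but the bodies reconstructed as a sketch:

  block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block") || true
    [[ -n $pt ]]   # a non-empty PTTYPE means something already lives on the disk
  }
  sec_size_to_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors
    echo $(( $(cat "/sys/block/$1/size") * 512 ))
  }
  blocks=()
  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
  if ! block_in_use nvme0n1 && (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )); then
    blocks+=(nvme0n1)                         # 4000787030016 bytes here, so it qualifies
  fi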
00:05:00.334 00:12:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.334 00:12:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:00.334 ************************************
00:05:00.334 START TEST nvme_mount
00:05:00.334 ************************************
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:00.334 00:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:00.901 Creating new GPT entries in memory.
00:05:00.901 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:00.901 other utilities.
00:05:00.901 00:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:05:00.901 00:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:00.901 00:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:00.901 00:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:00.901 00:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:01.982 Creating new GPT entries in memory.
00:05:01.982 The operation has completed successfully.
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3855709
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
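Condensed, the partition / format / mount sequence just traced looks like the sketch below. The sector numbers come straight from the log: 2048 through 2099199 is 2,097,152 sectors, which is exactly size=1073741824 divided by 512, i.e. a 1 GiB partition; the sync_dev_uevents.sh wait is omitted here for brevity:

  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                           # drop any old GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, serialized on the disk
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                          # quiet, force
  mount "${disk}p1" "$mnt"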
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.982 00:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:04.524 00:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
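The verify loop that just completed re-runs setup.sh config with only the test device in PCI_ALLOWED and walks every status line; the device under test must report its active mount rather than a driver rebind, which is why every other controller is checked against the expected BDF and skipped. A sketch of that loop, with the field layout inferred from the trace and $rootdir assumed to point at the spdk checkout:

  verify_active() {
    local dev=$1 mounts=$2 pci status found=0
    while read -r pci _ _ status; do
      if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
        found=1    # the allowed device stayed on its mount, as required
      fi
    done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" config)
    (( found == 1 ))
  }
  verify_active 0000:1a:00.0 nvme0n1:nvme0n1p1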
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:07.065 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:07.065 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:07.065 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:07.065 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:07.065 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.065 00:12:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:10.351 00:12:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' ''
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.255 00:12:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:15.544 00:12:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:18.078 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:18.078
00:05:18.078 real 0m18.022s
00:05:18.078 user 0m5.068s
00:05:18.078 sys 0m10.607s
00:12:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.078 00:12:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:05:18.078 ************************************
00:05:18.078 END TEST nvme_mount
00:05:18.078 ************************************
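Each mount test ends with the same teardown pattern seen in the cleanup_nvme trace: unmount if still mounted, then scrub filesystem and partition-table signatures so the next test starts from a blank disk. A sketch of that helper (paths shortened; the function body is reconstructed from the trace, not copied from devices.sh):

  cleanup_nvme() {
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # ext4 magic "53 ef" at 0x438
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # GPT headers plus protective MBR
  }

Note how the wipefs output above confirms both erasures: the ext4 superblock magic on the partition and the two GPT copies plus the PMBR on the raw disk.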
00:05:18.078 00:12:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:05:18.078 00:12:48 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.078 00:12:48 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.078 00:12:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:18.078 ************************************
00:05:18.078 START TEST dm_mount
00:05:18.078 ************************************
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:18.078 00:12:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:05:19.013 Creating new GPT entries in memory.
00:05:19.013 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:19.013 other utilities.
00:05:19.013 00:12:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:05:19.013 00:12:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:19.013 00:12:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:19.013 00:12:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:19.013 00:12:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:19.948 Creating new GPT entries in memory.
00:05:19.948 The operation has completed successfully.
00:05:19.948 00:12:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:19.948 00:12:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:19.948 00:12:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:19.948 00:12:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:19.948 00:12:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:05:20.885 The operation has completed successfully.
00:05:20.885 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:20.885 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:20.885 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3860434
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
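The device-mapper step just traced creates a dm target named nvme_dm_test over the two fresh 1 GiB partitions, waits for the /dev/mapper node, resolves it to dm-0 and confirms via the sysfs holders links that both partitions back it. The trace shows only the dmsetup create call, so the table below is an assumption (a simple linear concatenation), not necessarily the mapping the test uses:

  p1_sectors=$(blockdev --getsz /dev/nvme0n1p1)   # sizes in 512-byte sectors
  p2_sectors=$(blockdev --getsz /dev/nvme0n1p2)
  dmsetup create nvme_dm_test <<EOF
  0 $p1_sectors linear /dev/nvme0n1p1 0
  $p1_sectors $p2_sectors linear /dev/nvme0n1p2 0
  EOF
  for t in {1..5}; do                             # udev may take a moment to create the node
    [[ -e /dev/mapper/nvme_dm_test ]] && break
    sleep 1
  done
  dm=$(readlink -f /dev/mapper/nvme_dm_test)      # /dev/dm-0 in this run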
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size=
00:05:21.146 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:21.147 00:12:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:24.440 00:12:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:26.983 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:26.984 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0
00:05:26.984 00:12:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:26.984 00:12:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:26.984 00:12:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]]
00:05:29.518 00:13:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:32.066 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
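cleanup_dm, traced just above, mirrors cleanup_nvme but first tears down the device-mapper node before scrubbing the two backing partitions. A sketch reconstructed from the trace (the dm_mount variable is assumed to hold the mount point used above):

  cleanup_dm() {
    mountpoint -q "$dm_mount" && umount "$dm_mount"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
  }

The ordering matters: the dm node holds both partitions (the holders links checked earlier), so it must be removed before the partitions can be reused.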
00:05:32.066
00:05:32.066 real 0m13.918s
00:05:32.066 user 0m3.520s
00:05:32.066 sys 0m7.316s
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.066 00:13:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:32.066 ************************************
00:05:32.066 END TEST dm_mount
00:05:32.066 ************************************
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:32.066 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:32.066 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:32.066 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:32.066 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:32.066 00:13:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:32.066
00:05:32.066 real 0m38.514s
00:05:32.066 user 0m10.620s
00:05:32.066 sys 0m22.079s
00:13:02 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.066 00:13:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:32.066 ************************************
00:05:32.066 END TEST devices
00:05:32.066 ************************************
00:05:32.324
00:05:32.324 real 2m16.030s
00:05:32.324 user 0m38.929s
00:05:32.324 sys 1m19.079s
00:13:02 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.324 00:13:02 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:32.325 ************************************
00:05:32.325 END TEST setup.sh
00:05:32.325 ************************************
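The setup.sh status dump below reports per-node hugepage provisioning and the PCI device map. A minimal way to read the same hugepage counters straight from sysfs; the node and hugepage paths are the standard kernel layout, not something this log defines:

  for node in /sys/devices/system/node/node[0-9]*; do
    for size in "$node"/hugepages/hugepages-*; do
      printf '%s %s free %s / total %s\n' "${node##*/}" "${size##*-}" \
        "$(cat "$size/free_hugepages")" "$(cat "$size/nr_hugepages")"
    done
  done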
00:05:35.607 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:35.607 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:35.607 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:35.607 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:35.607 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:35.607 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:35.607 00:13:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.607 00:13:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.607 00:13:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:35.607 00:13:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:38.892 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:38.892 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.176 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:44.079 00:13:14 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:45.014 00:13:15 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:45.014 00:13:15 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:45.014 00:13:15 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.014 00:13:15 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:45.014 00:13:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:45.014 00:13:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:45.014 00:13:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.014 00:13:15 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:45.014 00:13:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:45.014 00:13:15 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:45.014 00:13:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:05:45.014 00:13:15 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.303 Waiting for block devices as requested 00:05:48.303 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:05:48.303 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:48.561 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:48.561 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:48.561 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:48.561 0000:00:04.3 
(8086 2021): vfio-pci -> ioatdma 00:05:48.820 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.820 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.820 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:49.079 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:49.079 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:49.079 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:49.337 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:49.337 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:49.337 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:49.337 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:49.595 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:52.124 00:13:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:52.124 00:13:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1485 -- # grep 0000:1a:00.0/nvme/nvme 00:05:52.124 00:13:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:52.124 00:13:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:52.124 00:13:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:52.124 00:13:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:52.124 00:13:22 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:52.124 00:13:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:52.124 00:13:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:52.124 00:13:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:52.124 00:13:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:52.124 00:13:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:52.124 00:13:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:52.124 00:13:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:52.124 00:13:22 -- common/autotest_common.sh@1541 -- # continue 00:05:52.124 00:13:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:52.124 00:13:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.124 00:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.124 00:13:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:52.124 00:13:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.124 00:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.124 00:13:22 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:55.420 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.2 (8086 
2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:55.420 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:58.712 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:00.619 00:13:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:00.619 00:13:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.619 00:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.619 00:13:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:00.619 00:13:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:00.619 00:13:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:00.619 00:13:31 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:00.619 00:13:31 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:00.619 00:13:31 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:00.619 00:13:31 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:00.619 00:13:31 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:00.619 00:13:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:00.619 00:13:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:00.619 00:13:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:00.619 00:13:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:00.619 00:13:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:00.619 00:13:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:00.619 00:13:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:06:00.619 00:13:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:00.619 00:13:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:06:00.619 00:13:31 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:06:00.619 00:13:31 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:00.619 00:13:31 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:06:00.619 00:13:31 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:06:00.619 00:13:31 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:06:00.619 00:13:31 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:06:00.619 00:13:31 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3870364 00:06:00.619 00:13:31 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.620 00:13:31 -- common/autotest_common.sh@1583 -- # waitforlisten 3870364 00:06:00.620 00:13:31 -- common/autotest_common.sh@831 -- # '[' -z 3870364 ']' 00:06:00.620 00:13:31 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.620 00:13:31 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.620 00:13:31 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
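The opal_revert_cleanup prologue traced above shows how the harness locates its target controller: gen_nvme.sh emits a JSON config, jq extracts the transport addresses, and each BDF's PCI device ID is read back from sysfs. A condensed sketch of that pattern (the rootdir path and the 0x0a54 device ID come straight from the trace):

  # Enumerate NVMe BDFs the way get_nvme_bdfs does in the trace above.
  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Keep only controllers whose PCI device ID matches, mirroring
  # get_nvme_bdfs_by_id (0x0a54 is the ID filtered for in this run).
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && printf '%s\n' "$bdf"
  done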
00:06:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.620 00:13:31 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.620 00:13:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.620 [2024-10-09 00:13:31.245216] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:00.620 [2024-10-09 00:13:31.245286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3870364 ] 00:06:00.879 [2024-10-09 00:13:31.320504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.879 [2024-10-09 00:13:31.407218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.819 00:13:32 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.819 00:13:32 -- common/autotest_common.sh@864 -- # return 0 00:06:01.819 00:13:32 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:01.819 00:13:32 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:01.819 00:13:32 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:05.107 nvme0n1 00:06:05.107 00:13:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:05.107 [2024-10-09 00:13:35.308170] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:05.107 request: 00:06:05.107 { 00:06:05.107 "nvme_ctrlr_name": "nvme0", 00:06:05.107 "password": "test", 00:06:05.107 "method": "bdev_nvme_opal_revert", 00:06:05.107 "req_id": 1 00:06:05.107 } 00:06:05.107 Got JSON-RPC error response 00:06:05.107 response: 00:06:05.107 { 00:06:05.107 "code": -32602, 00:06:05.107 "message": "Invalid parameters" 00:06:05.107 } 00:06:05.107 00:13:35 -- common/autotest_common.sh@1589 -- # true 00:06:05.107 00:13:35 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:05.107 00:13:35 -- common/autotest_common.sh@1593 -- # killprocess 3870364 00:06:05.107 00:13:35 -- common/autotest_common.sh@950 -- # '[' -z 3870364 ']' 00:06:05.107 00:13:35 -- common/autotest_common.sh@954 -- # kill -0 3870364 00:06:05.107 00:13:35 -- common/autotest_common.sh@955 -- # uname 00:06:05.107 00:13:35 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.107 00:13:35 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3870364 00:06:05.107 00:13:35 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.107 00:13:35 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.107 00:13:35 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3870364' 00:06:05.107 killing process with pid 3870364 00:06:05.107 00:13:35 -- common/autotest_common.sh@969 -- # kill 3870364 00:06:05.107 00:13:35 -- common/autotest_common.sh@974 -- # wait 3870364 00:06:09.316 00:13:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:09.316 00:13:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:09.316 00:13:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:09.316 00:13:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:09.316 00:13:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:09.316 00:13:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.316 00:13:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.316 00:13:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:09.316 00:13:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:09.316 00:13:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.316 00:13:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.316 00:13:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.316 ************************************ 00:06:09.316 START TEST env 00:06:09.316 ************************************ 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:09.316 * Looking for test storage... 00:06:09.316 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.316 00:13:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.316 00:13:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.316 00:13:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.316 00:13:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.316 00:13:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.316 00:13:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.316 00:13:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.316 00:13:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.316 00:13:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.316 00:13:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.316 00:13:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.316 00:13:39 env -- scripts/common.sh@344 -- # case "$op" in 00:06:09.316 00:13:39 env -- scripts/common.sh@345 -- # : 1 00:06:09.316 00:13:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.316 00:13:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.316 00:13:39 env -- scripts/common.sh@365 -- # decimal 1 00:06:09.316 00:13:39 env -- scripts/common.sh@353 -- # local d=1 00:06:09.316 00:13:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.316 00:13:39 env -- scripts/common.sh@355 -- # echo 1 00:06:09.316 00:13:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.316 00:13:39 env -- scripts/common.sh@366 -- # decimal 2 00:06:09.316 00:13:39 env -- scripts/common.sh@353 -- # local d=2 00:06:09.316 00:13:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.316 00:13:39 env -- scripts/common.sh@355 -- # echo 2 00:06:09.316 00:13:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.316 00:13:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.316 00:13:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.316 00:13:39 env -- scripts/common.sh@368 -- # return 0 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.316 --rc genhtml_branch_coverage=1 00:06:09.316 --rc genhtml_function_coverage=1 00:06:09.316 --rc genhtml_legend=1 00:06:09.316 --rc geninfo_all_blocks=1 00:06:09.316 --rc geninfo_unexecuted_blocks=1 00:06:09.316 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:09.316 ' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.316 --rc genhtml_branch_coverage=1 00:06:09.316 --rc genhtml_function_coverage=1 00:06:09.316 --rc genhtml_legend=1 00:06:09.316 --rc geninfo_all_blocks=1 00:06:09.316 --rc geninfo_unexecuted_blocks=1 00:06:09.316 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:09.316 ' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.316 --rc genhtml_branch_coverage=1 00:06:09.316 --rc genhtml_function_coverage=1 00:06:09.316 --rc genhtml_legend=1 00:06:09.316 --rc geninfo_all_blocks=1 00:06:09.316 --rc geninfo_unexecuted_blocks=1 00:06:09.316 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:09.316 ' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.316 --rc genhtml_branch_coverage=1 00:06:09.316 --rc genhtml_function_coverage=1 00:06:09.316 --rc genhtml_legend=1 00:06:09.316 --rc geninfo_all_blocks=1 00:06:09.316 --rc geninfo_unexecuted_blocks=1 00:06:09.316 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:09.316 ' 00:06:09.316 00:13:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.316 00:13:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.316 ************************************ 00:06:09.316 START TEST env_memory 00:06:09.316 ************************************ 00:06:09.316 00:13:39 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:09.316 00:06:09.316 00:06:09.316 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.316 http://cunit.sourceforge.net/ 00:06:09.316 00:06:09.316 00:06:09.316 Suite: memory 00:06:09.316 Test: alloc and free memory map ...[2024-10-09 00:13:39.707548] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:09.316 passed 00:06:09.316 Test: mem map translation ...[2024-10-09 00:13:39.721473] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:09.316 [2024-10-09 00:13:39.721493] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:09.316 [2024-10-09 00:13:39.721528] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:09.316 [2024-10-09 00:13:39.721537] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:09.316 passed 00:06:09.316 Test: mem map registration ...[2024-10-09 00:13:39.742573] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:09.316 [2024-10-09 00:13:39.742593] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:09.316 passed 00:06:09.316 Test: mem map adjacent registrations ...passed 00:06:09.316 00:06:09.316 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.316 suites 1 1 n/a 0 0 00:06:09.316 tests 4 4 4 0 0 00:06:09.316 asserts 152 152 152 0 n/a 00:06:09.316 00:06:09.316 Elapsed time = 0.086 seconds 00:06:09.316 00:06:09.316 real 0m0.098s 00:06:09.316 user 0m0.085s 00:06:09.316 sys 0m0.012s 00:06:09.316 00:13:39 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.316 00:13:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:09.316 ************************************ 00:06:09.316 END TEST env_memory 00:06:09.316 ************************************ 00:06:09.316 00:13:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.316 00:13:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.316 00:13:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.316 ************************************ 00:06:09.316 START TEST env_vtophys 00:06:09.316 ************************************ 00:06:09.316 00:13:39 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:09.316 EAL: lib.eal log level changed from notice to debug 00:06:09.316 EAL: Detected lcore 0 as core 0 on socket 0 00:06:09.316 EAL: Detected lcore 1 as core 1 on socket 0 00:06:09.316 EAL: Detected lcore 2 as core 2 on socket 0 00:06:09.316 EAL: Detected lcore 3 as 
core 3 on socket 0 00:06:09.316 EAL: Detected lcore 4 as core 4 on socket 0 00:06:09.316 EAL: Detected lcore 5 as core 8 on socket 0 00:06:09.316 EAL: Detected lcore 6 as core 9 on socket 0 00:06:09.316 EAL: Detected lcore 7 as core 10 on socket 0 00:06:09.316 EAL: Detected lcore 8 as core 11 on socket 0 00:06:09.316 EAL: Detected lcore 9 as core 16 on socket 0 00:06:09.316 EAL: Detected lcore 10 as core 17 on socket 0 00:06:09.316 EAL: Detected lcore 11 as core 18 on socket 0 00:06:09.316 EAL: Detected lcore 12 as core 19 on socket 0 00:06:09.316 EAL: Detected lcore 13 as core 20 on socket 0 00:06:09.316 EAL: Detected lcore 14 as core 24 on socket 0 00:06:09.316 EAL: Detected lcore 15 as core 25 on socket 0 00:06:09.316 EAL: Detected lcore 16 as core 26 on socket 0 00:06:09.317 EAL: Detected lcore 17 as core 27 on socket 0 00:06:09.317 EAL: Detected lcore 18 as core 0 on socket 1 00:06:09.317 EAL: Detected lcore 19 as core 1 on socket 1 00:06:09.317 EAL: Detected lcore 20 as core 2 on socket 1 00:06:09.317 EAL: Detected lcore 21 as core 3 on socket 1 00:06:09.317 EAL: Detected lcore 22 as core 4 on socket 1 00:06:09.317 EAL: Detected lcore 23 as core 8 on socket 1 00:06:09.317 EAL: Detected lcore 24 as core 9 on socket 1 00:06:09.317 EAL: Detected lcore 25 as core 10 on socket 1 00:06:09.317 EAL: Detected lcore 26 as core 11 on socket 1 00:06:09.317 EAL: Detected lcore 27 as core 16 on socket 1 00:06:09.317 EAL: Detected lcore 28 as core 17 on socket 1 00:06:09.317 EAL: Detected lcore 29 as core 18 on socket 1 00:06:09.317 EAL: Detected lcore 30 as core 19 on socket 1 00:06:09.317 EAL: Detected lcore 31 as core 20 on socket 1 00:06:09.317 EAL: Detected lcore 32 as core 24 on socket 1 00:06:09.317 EAL: Detected lcore 33 as core 25 on socket 1 00:06:09.317 EAL: Detected lcore 34 as core 26 on socket 1 00:06:09.317 EAL: Detected lcore 35 as core 27 on socket 1 00:06:09.317 EAL: Detected lcore 36 as core 0 on socket 0 00:06:09.317 EAL: Detected lcore 37 as core 1 on socket 0 00:06:09.317 EAL: Detected lcore 38 as core 2 on socket 0 00:06:09.317 EAL: Detected lcore 39 as core 3 on socket 0 00:06:09.317 EAL: Detected lcore 40 as core 4 on socket 0 00:06:09.317 EAL: Detected lcore 41 as core 8 on socket 0 00:06:09.317 EAL: Detected lcore 42 as core 9 on socket 0 00:06:09.317 EAL: Detected lcore 43 as core 10 on socket 0 00:06:09.317 EAL: Detected lcore 44 as core 11 on socket 0 00:06:09.317 EAL: Detected lcore 45 as core 16 on socket 0 00:06:09.317 EAL: Detected lcore 46 as core 17 on socket 0 00:06:09.317 EAL: Detected lcore 47 as core 18 on socket 0 00:06:09.317 EAL: Detected lcore 48 as core 19 on socket 0 00:06:09.317 EAL: Detected lcore 49 as core 20 on socket 0 00:06:09.317 EAL: Detected lcore 50 as core 24 on socket 0 00:06:09.317 EAL: Detected lcore 51 as core 25 on socket 0 00:06:09.317 EAL: Detected lcore 52 as core 26 on socket 0 00:06:09.317 EAL: Detected lcore 53 as core 27 on socket 0 00:06:09.317 EAL: Detected lcore 54 as core 0 on socket 1 00:06:09.317 EAL: Detected lcore 55 as core 1 on socket 1 00:06:09.317 EAL: Detected lcore 56 as core 2 on socket 1 00:06:09.317 EAL: Detected lcore 57 as core 3 on socket 1 00:06:09.317 EAL: Detected lcore 58 as core 4 on socket 1 00:06:09.317 EAL: Detected lcore 59 as core 8 on socket 1 00:06:09.317 EAL: Detected lcore 60 as core 9 on socket 1 00:06:09.317 EAL: Detected lcore 61 as core 10 on socket 1 00:06:09.317 EAL: Detected lcore 62 as core 11 on socket 1 00:06:09.317 EAL: Detected lcore 63 as core 16 on socket 1 00:06:09.317 EAL: 
Detected lcore 64 as core 17 on socket 1 00:06:09.317 EAL: Detected lcore 65 as core 18 on socket 1 00:06:09.317 EAL: Detected lcore 66 as core 19 on socket 1 00:06:09.317 EAL: Detected lcore 67 as core 20 on socket 1 00:06:09.317 EAL: Detected lcore 68 as core 24 on socket 1 00:06:09.317 EAL: Detected lcore 69 as core 25 on socket 1 00:06:09.317 EAL: Detected lcore 70 as core 26 on socket 1 00:06:09.317 EAL: Detected lcore 71 as core 27 on socket 1 00:06:09.317 EAL: Maximum logical cores by configuration: 128 00:06:09.317 EAL: Detected CPU lcores: 72 00:06:09.317 EAL: Detected NUMA nodes: 2 00:06:09.317 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:09.317 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:09.317 EAL: Checking presence of .so 'librte_eal.so' 00:06:09.317 EAL: Detected static linkage of DPDK 00:06:09.317 EAL: No shared files mode enabled, IPC will be disabled 00:06:09.317 EAL: Bus pci wants IOVA as 'DC' 00:06:09.317 EAL: Buses did not request a specific IOVA mode. 00:06:09.317 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:09.317 EAL: Selected IOVA mode 'VA' 00:06:09.317 EAL: Probing VFIO support... 00:06:09.317 EAL: IOMMU type 1 (Type 1) is supported 00:06:09.317 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:09.317 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:09.317 EAL: VFIO support initialized 00:06:09.317 EAL: Ask a virtual area of 0x2e000 bytes 00:06:09.317 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:09.317 EAL: Setting up physically contiguous memory... 00:06:09.317 EAL: Setting maximum number of open files to 524288 00:06:09.317 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:09.317 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:09.317 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:09.317 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:09.317 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.317 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:09.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:09.317 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.317 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:09.317 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:09.317 EAL: Hugepages will be freed exactly as allocated. 00:06:09.317 EAL: No shared files mode enabled, IPC is disabled 00:06:09.317 EAL: No shared files mode enabled, IPC is disabled 00:06:09.317 EAL: TSC frequency is ~2300000 KHz 00:06:09.317 EAL: Main lcore 0 is ready (tid=7fbc7cf4ea00;cpuset=[0]) 00:06:09.317 EAL: Trying to obtain current memory policy. 00:06:09.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.317 EAL: Restoring previous memory policy: 0 00:06:09.317 EAL: request: mp_malloc_sync 00:06:09.317 EAL: No shared files mode enabled, IPC is disabled 00:06:09.317 EAL: Heap on socket 0 was expanded by 2MB 00:06:09.317 EAL: No shared files mode enabled, IPC is disabled 00:06:09.317 EAL: Mem event callback 'spdk:(nil)' registered 00:06:09.317 00:06:09.317 00:06:09.317 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.317 http://cunit.sourceforge.net/ 00:06:09.317 00:06:09.317 00:06:09.317 Suite: components_suite 00:06:09.317 Test: vtophys_malloc_test ...passed 00:06:09.576 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:09.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.576 EAL: Restoring previous memory policy: 4 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.576 EAL: No shared files mode enabled, IPC is disabled 00:06:09.576 EAL: Heap on socket 0 was expanded by 4MB 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.576 EAL: No shared files mode enabled, IPC is disabled 00:06:09.576 EAL: Heap on socket 0 was shrunk by 4MB 00:06:09.576 EAL: Trying to obtain current memory policy. 
00:06:09.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.576 EAL: Restoring previous memory policy: 4 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.576 EAL: No shared files mode enabled, IPC is disabled 00:06:09.576 EAL: Heap on socket 0 was expanded by 6MB 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.576 EAL: No shared files mode enabled, IPC is disabled 00:06:09.576 EAL: Heap on socket 0 was shrunk by 6MB 00:06:09.576 EAL: Trying to obtain current memory policy. 00:06:09.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.576 EAL: Restoring previous memory policy: 4 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.576 EAL: No shared files mode enabled, IPC is disabled 00:06:09.576 EAL: Heap on socket 0 was expanded by 10MB 00:06:09.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.576 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was shrunk by 10MB 00:06:09.577 EAL: Trying to obtain current memory policy. 00:06:09.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.577 EAL: Restoring previous memory policy: 4 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was expanded by 18MB 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was shrunk by 18MB 00:06:09.577 EAL: Trying to obtain current memory policy. 00:06:09.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.577 EAL: Restoring previous memory policy: 4 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was expanded by 34MB 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was shrunk by 34MB 00:06:09.577 EAL: Trying to obtain current memory policy. 00:06:09.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.577 EAL: Restoring previous memory policy: 4 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was expanded by 66MB 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was shrunk by 66MB 00:06:09.577 EAL: Trying to obtain current memory policy. 
00:06:09.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.577 EAL: Restoring previous memory policy: 4 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was expanded by 130MB 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was shrunk by 130MB 00:06:09.577 EAL: Trying to obtain current memory policy. 00:06:09.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.577 EAL: Restoring previous memory policy: 4 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.577 EAL: request: mp_malloc_sync 00:06:09.577 EAL: No shared files mode enabled, IPC is disabled 00:06:09.577 EAL: Heap on socket 0 was expanded by 258MB 00:06:09.577 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.836 EAL: request: mp_malloc_sync 00:06:09.836 EAL: No shared files mode enabled, IPC is disabled 00:06:09.836 EAL: Heap on socket 0 was shrunk by 258MB 00:06:09.836 EAL: Trying to obtain current memory policy. 00:06:09.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.836 EAL: Restoring previous memory policy: 4 00:06:09.836 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.836 EAL: request: mp_malloc_sync 00:06:09.836 EAL: No shared files mode enabled, IPC is disabled 00:06:09.836 EAL: Heap on socket 0 was expanded by 514MB 00:06:09.836 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.095 EAL: request: mp_malloc_sync 00:06:10.095 EAL: No shared files mode enabled, IPC is disabled 00:06:10.095 EAL: Heap on socket 0 was shrunk by 514MB 00:06:10.095 EAL: Trying to obtain current memory policy. 
00:06:10.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.355 EAL: Restoring previous memory policy: 4 00:06:10.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.355 EAL: request: mp_malloc_sync 00:06:10.355 EAL: No shared files mode enabled, IPC is disabled 00:06:10.355 EAL: Heap on socket 0 was expanded by 1026MB 00:06:10.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.614 EAL: request: mp_malloc_sync 00:06:10.614 EAL: No shared files mode enabled, IPC is disabled 00:06:10.614 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:10.614 passed 00:06:10.614 00:06:10.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.614 suites 1 1 n/a 0 0 00:06:10.614 tests 2 2 2 0 0 00:06:10.614 asserts 497 497 497 0 n/a 00:06:10.614 00:06:10.614 Elapsed time = 1.129 seconds 00:06:10.614 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.614 EAL: request: mp_malloc_sync 00:06:10.614 EAL: No shared files mode enabled, IPC is disabled 00:06:10.614 EAL: Heap on socket 0 was shrunk by 2MB 00:06:10.614 EAL: No shared files mode enabled, IPC is disabled 00:06:10.614 EAL: No shared files mode enabled, IPC is disabled 00:06:10.614 EAL: No shared files mode enabled, IPC is disabled 00:06:10.614 00:06:10.614 real 0m1.261s 00:06:10.614 user 0m0.731s 00:06:10.614 sys 0m0.500s 00:06:10.614 00:13:41 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.614 00:13:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:10.614 ************************************ 00:06:10.614 END TEST env_vtophys 00:06:10.614 ************************************ 00:06:10.614 00:13:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:10.614 00:13:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.614 00:13:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.614 00:13:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.614 ************************************ 00:06:10.614 START TEST env_pci 00:06:10.614 ************************************ 00:06:10.614 00:13:41 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:10.614 00:06:10.614 00:06:10.614 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.614 http://cunit.sourceforge.net/ 00:06:10.614 00:06:10.614 00:06:10.614 Suite: pci 00:06:10.614 Test: pci_hook ...[2024-10-09 00:13:41.189633] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3871810 has claimed it 00:06:10.614 EAL: Cannot find device (10000:00:01.0) 00:06:10.614 EAL: Failed to attach device on primary process 00:06:10.614 passed 00:06:10.614 00:06:10.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.614 suites 1 1 n/a 0 0 00:06:10.614 tests 1 1 1 0 0 00:06:10.614 asserts 25 25 25 0 n/a 00:06:10.614 00:06:10.614 Elapsed time = 0.027 seconds 00:06:10.614 00:06:10.614 real 0m0.037s 00:06:10.614 user 0m0.007s 00:06:10.614 sys 0m0.030s 00:06:10.614 00:13:41 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.614 00:13:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:10.614 ************************************ 00:06:10.614 END TEST env_pci 00:06:10.614 ************************************ 00:06:10.874 00:13:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:10.874 
00:13:41 env -- env/env.sh@15 -- # uname 00:06:10.874 00:13:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:10.874 00:13:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:10.874 00:13:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.874 00:13:41 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:10.874 00:13:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.874 00:13:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.874 ************************************ 00:06:10.874 START TEST env_dpdk_post_init 00:06:10.874 ************************************ 00:06:10.874 00:13:41 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.874 EAL: Detected CPU lcores: 72 00:06:10.874 EAL: Detected NUMA nodes: 2 00:06:10.874 EAL: Detected static linkage of DPDK 00:06:10.874 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:10.874 EAL: Selected IOVA mode 'VA' 00:06:10.874 EAL: VFIO support initialized 00:06:10.874 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:10.874 EAL: Using IOMMU type 1 (Type 1) 00:06:11.812 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:17.087 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:17.087 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:17.346 Starting DPDK initialization... 00:06:17.346 Starting SPDK post initialization... 00:06:17.346 SPDK NVMe probe 00:06:17.346 Attaching to 0000:1a:00.0 00:06:17.346 Attached to 0000:1a:00.0 00:06:17.346 Cleaning up... 
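The env_dpdk_post_init run above is driven by a small piece of argument plumbing in env.sh, visible in the xtrace: a fixed core mask, plus a pinned base virtual address when running on Linux. A sketch of that assembly (flag values and the uname guard are copied from the trace; $rootdir and the direct invocation stand in for the run_test wrapper):

  # How env.sh builds the post-init invocation, per the xtrace above.
  argv='-c 0x1 '                               # single-core mask
  if [[ $(uname) = Linux ]]; then
      argv+=--base-virtaddr=0x200000000000     # stable VA base for DPDK
  fi
  # $argv is deliberately unquoted so it word-splits into separate flags
  "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv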
00:06:17.346
00:06:17.346 real 0m6.526s
00:06:17.346 user 0m4.746s
00:06:17.346 sys 0m1.031s
00:06:17.346 00:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:17.346 00:13:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:17.346 ************************************
00:06:17.346 END TEST env_dpdk_post_init
00:06:17.346 ************************************
00:06:17.346 00:13:47 env -- env/env.sh@26 -- # uname
00:06:17.346 00:13:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:17.346 00:13:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:17.346 00:13:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:17.346 00:13:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:17.346 00:13:47 env -- common/autotest_common.sh@10 -- # set +x
00:06:17.346 ************************************
00:06:17.346 START TEST env_mem_callbacks
00:06:17.346 ************************************
00:06:17.346 00:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:17.346 EAL: Detected CPU lcores: 72
00:06:17.346 EAL: Detected NUMA nodes: 2
00:06:17.346 EAL: Detected static linkage of DPDK
00:06:17.346 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:17.346 EAL: Selected IOVA mode 'VA'
00:06:17.346 EAL: VFIO support initialized
00:06:17.346 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:17.346
00:06:17.347
00:06:17.347 CUnit - A unit testing framework for C - Version 2.1-3
00:06:17.347 http://cunit.sourceforge.net/
00:06:17.347
00:06:17.347
00:06:17.347 Suite: memory
00:06:17.347 Test: test ...
00:06:17.347 register 0x200000200000 2097152
00:06:17.347 malloc 3145728
00:06:17.347 register 0x200000400000 4194304
00:06:17.347 buf 0x200000500000 len 3145728 PASSED
00:06:17.347 malloc 64
00:06:17.347 buf 0x2000004fff40 len 64 PASSED
00:06:17.347 malloc 4194304
00:06:17.347 register 0x200000800000 6291456
00:06:17.347 buf 0x200000a00000 len 4194304 PASSED
00:06:17.347 free 0x200000500000 3145728
00:06:17.347 free 0x2000004fff40 64
00:06:17.347 unregister 0x200000400000 4194304 PASSED
00:06:17.347 free 0x200000a00000 4194304
00:06:17.347 unregister 0x200000800000 6291456 PASSED
00:06:17.347 malloc 8388608
00:06:17.347 register 0x200000400000 10485760
00:06:17.347 buf 0x200000600000 len 8388608 PASSED
00:06:17.347 free 0x200000600000 8388608
00:06:17.347 unregister 0x200000400000 10485760 PASSED
00:06:17.347 passed
00:06:17.347
00:06:17.347 Run Summary: Type Total Ran Passed Failed Inactive
00:06:17.347 suites 1 1 n/a 0 0
00:06:17.347 tests 1 1 1 0 0
00:06:17.347 asserts 15 15 15 0 n/a
00:06:17.347
00:06:17.347 Elapsed time = 0.005 seconds
00:06:17.347
00:06:17.347 real 0m0.066s
00:06:17.347 user 0m0.021s
00:06:17.347 sys 0m0.045s
00:13:47 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:17.347 00:13:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:17.347 ************************************
00:06:17.347 END TEST env_mem_callbacks
00:06:17.347 ************************************
00:06:17.606
00:06:17.606 real 0m8.566s
00:06:17.606 user 0m5.849s
00:06:17.606 sys 0m1.980s
00:06:17.606 00:13:48 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:17.606 00:13:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:17.606 ************************************
00:06:17.606 END TEST env
00:06:17.606 ************************************
00:06:17.606 00:13:48 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:06:17.606 00:13:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:17.606 00:13:48 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:17.606 00:13:48 -- common/autotest_common.sh@10 -- # set +x
00:06:17.606 ************************************
00:06:17.606 START TEST rpc
00:06:17.606 ************************************
00:06:17.606 00:13:48 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:06:17.606 * Looking for test storage...
00:06:17.606 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:17.606 00:13:48 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.606 00:13:48 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.606 00:13:48 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.606 00:13:48 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.606 00:13:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.606 00:13:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.606 00:13:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.606 00:13:48 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.606 00:13:48 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.606 00:13:48 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.606 00:13:48 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.606 00:13:48 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.606 00:13:48 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.606 00:13:48 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.606 00:13:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.606 00:13:48 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:17.606 00:13:48 rpc -- scripts/common.sh@345 -- # : 1 00:06:17.606 00:13:48 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.606 00:13:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.606 00:13:48 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:17.606 00:13:48 rpc -- scripts/common.sh@353 -- # local d=1 00:06:17.606 00:13:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.606 00:13:48 rpc -- scripts/common.sh@355 -- # echo 1 00:06:17.865 00:13:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.865 00:13:48 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:17.865 00:13:48 rpc -- scripts/common.sh@353 -- # local d=2 00:06:17.865 00:13:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.865 00:13:48 rpc -- scripts/common.sh@355 -- # echo 2 00:06:17.865 00:13:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.865 00:13:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.865 00:13:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.865 00:13:48 rpc -- scripts/common.sh@368 -- # return 0 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.865 --rc genhtml_branch_coverage=1 00:06:17.865 --rc genhtml_function_coverage=1 00:06:17.865 --rc genhtml_legend=1 00:06:17.865 --rc geninfo_all_blocks=1 00:06:17.865 --rc geninfo_unexecuted_blocks=1 00:06:17.865 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.865 ' 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.865 --rc genhtml_branch_coverage=1 00:06:17.865 --rc genhtml_function_coverage=1 00:06:17.865 --rc genhtml_legend=1 00:06:17.865 --rc geninfo_all_blocks=1 00:06:17.865 --rc geninfo_unexecuted_blocks=1 00:06:17.865 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.865 ' 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:06:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.865 --rc genhtml_branch_coverage=1 00:06:17.865 --rc genhtml_function_coverage=1 00:06:17.865 --rc genhtml_legend=1 00:06:17.865 --rc geninfo_all_blocks=1 00:06:17.865 --rc geninfo_unexecuted_blocks=1 00:06:17.865 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.865 ' 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.865 --rc genhtml_branch_coverage=1 00:06:17.865 --rc genhtml_function_coverage=1 00:06:17.865 --rc genhtml_legend=1 00:06:17.865 --rc geninfo_all_blocks=1 00:06:17.865 --rc geninfo_unexecuted_blocks=1 00:06:17.865 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.865 ' 00:06:17.865 00:13:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3872854 00:06:17.865 00:13:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.865 00:13:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:17.865 00:13:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3872854 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@831 -- # '[' -z 3872854 ']' 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.865 00:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.865 [2024-10-09 00:13:48.275702] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:17.866 [2024-10-09 00:13:48.275772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872854 ] 00:06:17.866 [2024-10-09 00:13:48.348921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.866 [2024-10-09 00:13:48.429111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:17.866 [2024-10-09 00:13:48.429158] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3872854' to capture a snapshot of events at runtime. 00:06:17.866 [2024-10-09 00:13:48.429168] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.866 [2024-10-09 00:13:48.429177] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.866 [2024-10-09 00:13:48.429184] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3872854 for offline analysis/debug. 
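Both spdk_tgt launches in this log (pid 3870364 earlier, pid 3872854 here) follow the same handshake: start the target, then poll until the process is alive and its JSON-RPC UNIX socket exists. A simplified stand-in for that wait loop (the real waitforlisten in autotest_common.sh does more, such as probing the RPC endpoint; the socket path and retry cap are taken from the trace):

  # Simplified sketch of waitforlisten, per the trace above.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process died
          [[ -S $rpc_addr ]] && return 0           # RPC socket is up
          sleep 0.5
      done
      return 1
  }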
00:06:17.866 [2024-10-09 00:13:48.429681] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.799 00:13:49 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.800 00:13:49 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.800 00:13:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:18.800 00:13:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:18.800 00:13:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:18.800 00:13:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:18.800 00:13:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.800 00:13:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.800 00:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 ************************************ 00:06:18.800 START TEST rpc_integrity 00:06:18.800 ************************************ 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.800 { 00:06:18.800 "name": "Malloc0", 00:06:18.800 "aliases": [ 00:06:18.800 "44230ebe-9c35-4519-9b79-8a170ac3da33" 00:06:18.800 ], 00:06:18.800 "product_name": "Malloc disk", 00:06:18.800 "block_size": 512, 00:06:18.800 "num_blocks": 16384, 00:06:18.800 "uuid": "44230ebe-9c35-4519-9b79-8a170ac3da33", 00:06:18.800 "assigned_rate_limits": { 00:06:18.800 "rw_ios_per_sec": 0, 00:06:18.800 "rw_mbytes_per_sec": 0, 00:06:18.800 "r_mbytes_per_sec": 0, 00:06:18.800 "w_mbytes_per_sec": 
0 00:06:18.800 }, 00:06:18.800 "claimed": false, 00:06:18.800 "zoned": false, 00:06:18.800 "supported_io_types": { 00:06:18.800 "read": true, 00:06:18.800 "write": true, 00:06:18.800 "unmap": true, 00:06:18.800 "flush": true, 00:06:18.800 "reset": true, 00:06:18.800 "nvme_admin": false, 00:06:18.800 "nvme_io": false, 00:06:18.800 "nvme_io_md": false, 00:06:18.800 "write_zeroes": true, 00:06:18.800 "zcopy": true, 00:06:18.800 "get_zone_info": false, 00:06:18.800 "zone_management": false, 00:06:18.800 "zone_append": false, 00:06:18.800 "compare": false, 00:06:18.800 "compare_and_write": false, 00:06:18.800 "abort": true, 00:06:18.800 "seek_hole": false, 00:06:18.800 "seek_data": false, 00:06:18.800 "copy": true, 00:06:18.800 "nvme_iov_md": false 00:06:18.800 }, 00:06:18.800 "memory_domains": [ 00:06:18.800 { 00:06:18.800 "dma_device_id": "system", 00:06:18.800 "dma_device_type": 1 00:06:18.800 }, 00:06:18.800 { 00:06:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.800 "dma_device_type": 2 00:06:18.800 } 00:06:18.800 ], 00:06:18.800 "driver_specific": {} 00:06:18.800 } 00:06:18.800 ]' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 [2024-10-09 00:13:49.319845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:18.800 [2024-10-09 00:13:49.319883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.800 [2024-10-09 00:13:49.319901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5e80d10 00:06:18.800 [2024-10-09 00:13:49.319911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.800 [2024-10-09 00:13:49.320893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.800 [2024-10-09 00:13:49.320919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:18.800 Passthru0 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.800 { 00:06:18.800 "name": "Malloc0", 00:06:18.800 "aliases": [ 00:06:18.800 "44230ebe-9c35-4519-9b79-8a170ac3da33" 00:06:18.800 ], 00:06:18.800 "product_name": "Malloc disk", 00:06:18.800 "block_size": 512, 00:06:18.800 "num_blocks": 16384, 00:06:18.800 "uuid": "44230ebe-9c35-4519-9b79-8a170ac3da33", 00:06:18.800 "assigned_rate_limits": { 00:06:18.800 "rw_ios_per_sec": 0, 00:06:18.800 "rw_mbytes_per_sec": 0, 00:06:18.800 "r_mbytes_per_sec": 0, 00:06:18.800 "w_mbytes_per_sec": 0 00:06:18.800 }, 00:06:18.800 "claimed": true, 00:06:18.800 "claim_type": "exclusive_write", 00:06:18.800 "zoned": false, 00:06:18.800 "supported_io_types": { 00:06:18.800 "read": true, 00:06:18.800 "write": true, 00:06:18.800 "unmap": true, 
00:06:18.800 "flush": true, 00:06:18.800 "reset": true, 00:06:18.800 "nvme_admin": false, 00:06:18.800 "nvme_io": false, 00:06:18.800 "nvme_io_md": false, 00:06:18.800 "write_zeroes": true, 00:06:18.800 "zcopy": true, 00:06:18.800 "get_zone_info": false, 00:06:18.800 "zone_management": false, 00:06:18.800 "zone_append": false, 00:06:18.800 "compare": false, 00:06:18.800 "compare_and_write": false, 00:06:18.800 "abort": true, 00:06:18.800 "seek_hole": false, 00:06:18.800 "seek_data": false, 00:06:18.800 "copy": true, 00:06:18.800 "nvme_iov_md": false 00:06:18.800 }, 00:06:18.800 "memory_domains": [ 00:06:18.800 { 00:06:18.800 "dma_device_id": "system", 00:06:18.800 "dma_device_type": 1 00:06:18.800 }, 00:06:18.800 { 00:06:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.800 "dma_device_type": 2 00:06:18.800 } 00:06:18.800 ], 00:06:18.800 "driver_specific": {} 00:06:18.800 }, 00:06:18.800 { 00:06:18.800 "name": "Passthru0", 00:06:18.800 "aliases": [ 00:06:18.800 "3c0b6295-a3f8-52c8-b72d-8f2869f2d37c" 00:06:18.800 ], 00:06:18.800 "product_name": "passthru", 00:06:18.800 "block_size": 512, 00:06:18.800 "num_blocks": 16384, 00:06:18.800 "uuid": "3c0b6295-a3f8-52c8-b72d-8f2869f2d37c", 00:06:18.800 "assigned_rate_limits": { 00:06:18.800 "rw_ios_per_sec": 0, 00:06:18.800 "rw_mbytes_per_sec": 0, 00:06:18.800 "r_mbytes_per_sec": 0, 00:06:18.800 "w_mbytes_per_sec": 0 00:06:18.800 }, 00:06:18.800 "claimed": false, 00:06:18.800 "zoned": false, 00:06:18.800 "supported_io_types": { 00:06:18.800 "read": true, 00:06:18.800 "write": true, 00:06:18.800 "unmap": true, 00:06:18.800 "flush": true, 00:06:18.800 "reset": true, 00:06:18.800 "nvme_admin": false, 00:06:18.800 "nvme_io": false, 00:06:18.800 "nvme_io_md": false, 00:06:18.800 "write_zeroes": true, 00:06:18.800 "zcopy": true, 00:06:18.800 "get_zone_info": false, 00:06:18.800 "zone_management": false, 00:06:18.800 "zone_append": false, 00:06:18.800 "compare": false, 00:06:18.800 "compare_and_write": false, 00:06:18.800 "abort": true, 00:06:18.800 "seek_hole": false, 00:06:18.800 "seek_data": false, 00:06:18.800 "copy": true, 00:06:18.800 "nvme_iov_md": false 00:06:18.800 }, 00:06:18.800 "memory_domains": [ 00:06:18.800 { 00:06:18.800 "dma_device_id": "system", 00:06:18.800 "dma_device_type": 1 00:06:18.800 }, 00:06:18.800 { 00:06:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.800 "dma_device_type": 2 00:06:18.800 } 00:06:18.800 ], 00:06:18.800 "driver_specific": { 00:06:18.800 "passthru": { 00:06:18.800 "name": "Passthru0", 00:06:18.800 "base_bdev_name": "Malloc0" 00:06:18.800 } 00:06:18.800 } 00:06:18.800 } 00:06:18.800 ]' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.800 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.800 00:13:49 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:18.801 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.801 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.801 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.801 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:18.801 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:19.059 00:13:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:19.059 00:06:19.059 real 0m0.290s 00:06:19.059 user 0m0.178s 00:06:19.059 sys 0m0.050s 00:06:19.059 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.059 00:13:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.059 ************************************ 00:06:19.059 END TEST rpc_integrity 00:06:19.059 ************************************ 00:06:19.059 00:13:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:19.059 00:13:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.059 00:13:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.059 00:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.059 ************************************ 00:06:19.059 START TEST rpc_plugins 00:06:19.059 ************************************ 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:19.059 { 00:06:19.059 "name": "Malloc1", 00:06:19.059 "aliases": [ 00:06:19.059 "63dbd15c-a844-43f9-becb-1ddd2a79a698" 00:06:19.059 ], 00:06:19.059 "product_name": "Malloc disk", 00:06:19.059 "block_size": 4096, 00:06:19.059 "num_blocks": 256, 00:06:19.059 "uuid": "63dbd15c-a844-43f9-becb-1ddd2a79a698", 00:06:19.059 "assigned_rate_limits": { 00:06:19.059 "rw_ios_per_sec": 0, 00:06:19.059 "rw_mbytes_per_sec": 0, 00:06:19.059 "r_mbytes_per_sec": 0, 00:06:19.059 "w_mbytes_per_sec": 0 00:06:19.059 }, 00:06:19.059 "claimed": false, 00:06:19.059 "zoned": false, 00:06:19.059 "supported_io_types": { 00:06:19.059 "read": true, 00:06:19.059 "write": true, 00:06:19.059 "unmap": true, 00:06:19.059 "flush": true, 00:06:19.059 "reset": true, 00:06:19.059 "nvme_admin": false, 00:06:19.059 "nvme_io": false, 00:06:19.059 "nvme_io_md": false, 00:06:19.059 "write_zeroes": true, 00:06:19.059 "zcopy": true, 00:06:19.059 "get_zone_info": false, 00:06:19.059 "zone_management": false, 00:06:19.059 "zone_append": false, 00:06:19.059 "compare": false, 00:06:19.059 "compare_and_write": false, 00:06:19.059 "abort": true, 00:06:19.059 "seek_hole": false, 00:06:19.059 "seek_data": false, 00:06:19.059 "copy": true, 00:06:19.059 
"nvme_iov_md": false 00:06:19.059 }, 00:06:19.059 "memory_domains": [ 00:06:19.059 { 00:06:19.059 "dma_device_id": "system", 00:06:19.059 "dma_device_type": 1 00:06:19.059 }, 00:06:19.059 { 00:06:19.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.059 "dma_device_type": 2 00:06:19.059 } 00:06:19.059 ], 00:06:19.059 "driver_specific": {} 00:06:19.059 } 00:06:19.059 ]' 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:19.059 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.059 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.060 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.060 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:19.060 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.060 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.060 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.060 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:19.060 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:19.318 00:13:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:19.318 00:06:19.318 real 0m0.147s 00:06:19.318 user 0m0.091s 00:06:19.318 sys 0m0.023s 00:06:19.318 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.318 00:13:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:19.318 ************************************ 00:06:19.318 END TEST rpc_plugins 00:06:19.318 ************************************ 00:06:19.318 00:13:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:19.318 00:13:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.318 00:13:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.318 00:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.318 ************************************ 00:06:19.318 START TEST rpc_trace_cmd_test 00:06:19.318 ************************************ 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:19.318 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3872854", 00:06:19.318 "tpoint_group_mask": "0x8", 00:06:19.318 "iscsi_conn": { 00:06:19.318 "mask": "0x2", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "scsi": { 00:06:19.318 "mask": "0x4", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "bdev": { 00:06:19.318 "mask": "0x8", 00:06:19.318 "tpoint_mask": "0xffffffffffffffff" 00:06:19.318 }, 00:06:19.318 "nvmf_rdma": { 00:06:19.318 "mask": "0x10", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "nvmf_tcp": { 00:06:19.318 "mask": "0x20", 
00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "ftl": { 00:06:19.318 "mask": "0x40", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "blobfs": { 00:06:19.318 "mask": "0x80", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "dsa": { 00:06:19.318 "mask": "0x200", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "thread": { 00:06:19.318 "mask": "0x400", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "nvme_pcie": { 00:06:19.318 "mask": "0x800", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "iaa": { 00:06:19.318 "mask": "0x1000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "nvme_tcp": { 00:06:19.318 "mask": "0x2000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "bdev_nvme": { 00:06:19.318 "mask": "0x4000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "sock": { 00:06:19.318 "mask": "0x8000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "blob": { 00:06:19.318 "mask": "0x10000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "bdev_raid": { 00:06:19.318 "mask": "0x20000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 }, 00:06:19.318 "scheduler": { 00:06:19.318 "mask": "0x40000", 00:06:19.318 "tpoint_mask": "0x0" 00:06:19.318 } 00:06:19.318 }' 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:19.318 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:19.319 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:19.577 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:19.577 00:13:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:19.577 00:13:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:19.577 00:06:19.577 real 0m0.223s 00:06:19.577 user 0m0.173s 00:06:19.577 sys 0m0.041s 00:06:19.577 00:13:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.577 00:13:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 ************************************ 00:06:19.577 END TEST rpc_trace_cmd_test 00:06:19.577 ************************************ 00:06:19.577 00:13:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:19.577 00:13:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:19.577 00:13:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:19.577 00:13:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.577 00:13:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.577 00:13:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 ************************************ 00:06:19.577 START TEST rpc_daemon_integrity 00:06:19.577 ************************************ 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.577 00:13:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.577 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:19.577 { 00:06:19.577 "name": "Malloc2", 00:06:19.577 "aliases": [ 00:06:19.577 "265c588f-6c3a-490d-905c-7c073f64978e" 00:06:19.578 ], 00:06:19.578 "product_name": "Malloc disk", 00:06:19.578 "block_size": 512, 00:06:19.578 "num_blocks": 16384, 00:06:19.578 "uuid": "265c588f-6c3a-490d-905c-7c073f64978e", 00:06:19.578 "assigned_rate_limits": { 00:06:19.578 "rw_ios_per_sec": 0, 00:06:19.578 "rw_mbytes_per_sec": 0, 00:06:19.578 "r_mbytes_per_sec": 0, 00:06:19.578 "w_mbytes_per_sec": 0 00:06:19.578 }, 00:06:19.578 "claimed": false, 00:06:19.578 "zoned": false, 00:06:19.578 "supported_io_types": { 00:06:19.578 "read": true, 00:06:19.578 "write": true, 00:06:19.578 "unmap": true, 00:06:19.578 "flush": true, 00:06:19.578 "reset": true, 00:06:19.578 "nvme_admin": false, 00:06:19.578 "nvme_io": false, 00:06:19.578 "nvme_io_md": false, 00:06:19.578 "write_zeroes": true, 00:06:19.578 "zcopy": true, 00:06:19.578 "get_zone_info": false, 00:06:19.578 "zone_management": false, 00:06:19.578 "zone_append": false, 00:06:19.578 "compare": false, 00:06:19.578 "compare_and_write": false, 00:06:19.578 "abort": true, 00:06:19.578 "seek_hole": false, 00:06:19.578 "seek_data": false, 00:06:19.578 "copy": true, 00:06:19.578 "nvme_iov_md": false 00:06:19.578 }, 00:06:19.578 "memory_domains": [ 00:06:19.578 { 00:06:19.578 "dma_device_id": "system", 00:06:19.578 "dma_device_type": 1 00:06:19.578 }, 00:06:19.578 { 00:06:19.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.578 "dma_device_type": 2 00:06:19.578 } 00:06:19.578 ], 00:06:19.578 "driver_specific": {} 00:06:19.578 } 00:06:19.578 ]' 00:06:19.578 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.836 [2024-10-09 00:13:50.246250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:19.836 
[2024-10-09 00:13:50.246289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.836 [2024-10-09 00:13:50.246309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5fa1d20 00:06:19.836 [2024-10-09 00:13:50.246319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.836 [2024-10-09 00:13:50.247132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.836 [2024-10-09 00:13:50.247157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:19.836 Passthru0 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.836 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:19.836 { 00:06:19.836 "name": "Malloc2", 00:06:19.836 "aliases": [ 00:06:19.836 "265c588f-6c3a-490d-905c-7c073f64978e" 00:06:19.836 ], 00:06:19.836 "product_name": "Malloc disk", 00:06:19.836 "block_size": 512, 00:06:19.836 "num_blocks": 16384, 00:06:19.836 "uuid": "265c588f-6c3a-490d-905c-7c073f64978e", 00:06:19.836 "assigned_rate_limits": { 00:06:19.836 "rw_ios_per_sec": 0, 00:06:19.836 "rw_mbytes_per_sec": 0, 00:06:19.836 "r_mbytes_per_sec": 0, 00:06:19.836 "w_mbytes_per_sec": 0 00:06:19.836 }, 00:06:19.836 "claimed": true, 00:06:19.837 "claim_type": "exclusive_write", 00:06:19.837 "zoned": false, 00:06:19.837 "supported_io_types": { 00:06:19.837 "read": true, 00:06:19.837 "write": true, 00:06:19.837 "unmap": true, 00:06:19.837 "flush": true, 00:06:19.837 "reset": true, 00:06:19.837 "nvme_admin": false, 00:06:19.837 "nvme_io": false, 00:06:19.837 "nvme_io_md": false, 00:06:19.837 "write_zeroes": true, 00:06:19.837 "zcopy": true, 00:06:19.837 "get_zone_info": false, 00:06:19.837 "zone_management": false, 00:06:19.837 "zone_append": false, 00:06:19.837 "compare": false, 00:06:19.837 "compare_and_write": false, 00:06:19.837 "abort": true, 00:06:19.837 "seek_hole": false, 00:06:19.837 "seek_data": false, 00:06:19.837 "copy": true, 00:06:19.837 "nvme_iov_md": false 00:06:19.837 }, 00:06:19.837 "memory_domains": [ 00:06:19.837 { 00:06:19.837 "dma_device_id": "system", 00:06:19.837 "dma_device_type": 1 00:06:19.837 }, 00:06:19.837 { 00:06:19.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.837 "dma_device_type": 2 00:06:19.837 } 00:06:19.837 ], 00:06:19.837 "driver_specific": {} 00:06:19.837 }, 00:06:19.837 { 00:06:19.837 "name": "Passthru0", 00:06:19.837 "aliases": [ 00:06:19.837 "2c2b95fd-dfe2-514d-946d-f28c04667eb3" 00:06:19.837 ], 00:06:19.837 "product_name": "passthru", 00:06:19.837 "block_size": 512, 00:06:19.837 "num_blocks": 16384, 00:06:19.837 "uuid": "2c2b95fd-dfe2-514d-946d-f28c04667eb3", 00:06:19.837 "assigned_rate_limits": { 00:06:19.837 "rw_ios_per_sec": 0, 00:06:19.837 "rw_mbytes_per_sec": 0, 00:06:19.837 "r_mbytes_per_sec": 0, 00:06:19.837 "w_mbytes_per_sec": 0 00:06:19.837 }, 00:06:19.837 "claimed": false, 00:06:19.837 "zoned": false, 00:06:19.837 "supported_io_types": { 00:06:19.837 "read": true, 00:06:19.837 "write": true, 00:06:19.837 "unmap": true, 00:06:19.837 "flush": true, 00:06:19.837 "reset": true, 
00:06:19.837 "nvme_admin": false, 00:06:19.837 "nvme_io": false, 00:06:19.837 "nvme_io_md": false, 00:06:19.837 "write_zeroes": true, 00:06:19.837 "zcopy": true, 00:06:19.837 "get_zone_info": false, 00:06:19.837 "zone_management": false, 00:06:19.837 "zone_append": false, 00:06:19.837 "compare": false, 00:06:19.837 "compare_and_write": false, 00:06:19.837 "abort": true, 00:06:19.837 "seek_hole": false, 00:06:19.837 "seek_data": false, 00:06:19.837 "copy": true, 00:06:19.837 "nvme_iov_md": false 00:06:19.837 }, 00:06:19.837 "memory_domains": [ 00:06:19.837 { 00:06:19.837 "dma_device_id": "system", 00:06:19.837 "dma_device_type": 1 00:06:19.837 }, 00:06:19.837 { 00:06:19.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.837 "dma_device_type": 2 00:06:19.837 } 00:06:19.837 ], 00:06:19.837 "driver_specific": { 00:06:19.837 "passthru": { 00:06:19.837 "name": "Passthru0", 00:06:19.837 "base_bdev_name": "Malloc2" 00:06:19.837 } 00:06:19.837 } 00:06:19.837 } 00:06:19.837 ]' 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:19.837 00:06:19.837 real 0m0.299s 00:06:19.837 user 0m0.173s 00:06:19.837 sys 0m0.062s 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.837 00:13:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:19.837 ************************************ 00:06:19.837 END TEST rpc_daemon_integrity 00:06:19.837 ************************************ 00:06:19.837 00:13:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:19.837 00:13:50 rpc -- rpc/rpc.sh@84 -- # killprocess 3872854 00:06:19.837 00:13:50 rpc -- common/autotest_common.sh@950 -- # '[' -z 3872854 ']' 00:06:19.837 00:13:50 rpc -- common/autotest_common.sh@954 -- # kill -0 3872854 00:06:19.837 00:13:50 rpc -- common/autotest_common.sh@955 -- # uname 00:06:19.837 00:13:50 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.837 00:13:50 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3872854 
00:06:20.096 00:13:50 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.096 00:13:50 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.096 00:13:50 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3872854' 00:06:20.096 killing process with pid 3872854 00:06:20.096 00:13:50 rpc -- common/autotest_common.sh@969 -- # kill 3872854 00:06:20.096 00:13:50 rpc -- common/autotest_common.sh@974 -- # wait 3872854 00:06:20.354 00:06:20.354 real 0m2.749s 00:06:20.354 user 0m3.452s 00:06:20.354 sys 0m0.857s 00:06:20.354 00:13:50 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.354 00:13:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.354 ************************************ 00:06:20.354 END TEST rpc 00:06:20.354 ************************************ 00:06:20.354 00:13:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:20.354 00:13:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.354 00:13:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.354 00:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:20.354 ************************************ 00:06:20.354 START TEST skip_rpc 00:06:20.354 ************************************ 00:06:20.354 00:13:50 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:20.614 * Looking for test storage... 00:06:20.614 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:20.614 00:13:51 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.614 00:13:51 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.614 00:13:51 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.614 00:13:51 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.614 00:13:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.615 --rc genhtml_branch_coverage=1 00:06:20.615 --rc genhtml_function_coverage=1 00:06:20.615 --rc genhtml_legend=1 00:06:20.615 --rc geninfo_all_blocks=1 00:06:20.615 --rc geninfo_unexecuted_blocks=1 00:06:20.615 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.615 ' 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.615 --rc genhtml_branch_coverage=1 00:06:20.615 --rc genhtml_function_coverage=1 00:06:20.615 --rc genhtml_legend=1 00:06:20.615 --rc geninfo_all_blocks=1 00:06:20.615 --rc geninfo_unexecuted_blocks=1 00:06:20.615 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.615 ' 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.615 --rc genhtml_branch_coverage=1 00:06:20.615 --rc genhtml_function_coverage=1 00:06:20.615 --rc genhtml_legend=1 00:06:20.615 --rc geninfo_all_blocks=1 00:06:20.615 --rc geninfo_unexecuted_blocks=1 00:06:20.615 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.615 ' 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.615 --rc genhtml_branch_coverage=1 00:06:20.615 --rc genhtml_function_coverage=1 00:06:20.615 --rc genhtml_legend=1 00:06:20.615 --rc geninfo_all_blocks=1 00:06:20.615 --rc geninfo_unexecuted_blocks=1 00:06:20.615 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:20.615 ' 00:06:20.615 00:13:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:20.615 00:13:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:20.615 00:13:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.615 00:13:51 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.615 00:13:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.615 ************************************ 00:06:20.615 START TEST skip_rpc 00:06:20.615 ************************************ 00:06:20.615 00:13:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:20.615 00:13:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3873388 00:06:20.615 00:13:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.615 00:13:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:20.615 00:13:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:20.615 [2024-10-09 00:13:51.165995] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:20.615 [2024-10-09 00:13:51.166056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873388 ] 00:06:20.615 [2024-10-09 00:13:51.236248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.874 [2024-10-09 00:13:51.319614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3873388 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3873388 ']' 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3873388 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3873388 
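[editor's note] skip_rpc checks the inverse of the rpc suite above: started with --no-rpc-server, the target must come up without binding an RPC socket, so any client call has to fail cleanly. A minimal reproduction, assuming a built SPDK tree (the fixed sleep stands in for the waitforlisten helper the test uses):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!
    sleep 2
    ./scripts/rpc.py spdk_get_version || echo 'rpc unavailable, as expected'
    kill $pid; wait $pid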
00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3873388' 00:06:26.157 killing process with pid 3873388 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3873388 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3873388 00:06:26.157 00:06:26.157 real 0m5.434s 00:06:26.157 user 0m5.155s 00:06:26.157 sys 0m0.325s 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.157 00:13:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.157 ************************************ 00:06:26.157 END TEST skip_rpc 00:06:26.157 ************************************ 00:06:26.157 00:13:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:26.157 00:13:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.157 00:13:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.157 00:13:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.157 ************************************ 00:06:26.157 START TEST skip_rpc_with_json 00:06:26.157 ************************************ 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3874124 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3874124 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3874124 ']' 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.157 00:13:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.157 [2024-10-09 00:13:56.665342] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:26.157 [2024-10-09 00:13:56.665400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874124 ] 00:06:26.157 [2024-10-09 00:13:56.737482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.416 [2024-10-09 00:13:56.830495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 [2024-10-09 00:13:57.529788] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:26.984 request: 00:06:26.984 { 00:06:26.984 "trtype": "tcp", 00:06:26.984 "method": "nvmf_get_transports", 00:06:26.984 "req_id": 1 00:06:26.984 } 00:06:26.984 Got JSON-RPC error response 00:06:26.984 response: 00:06:26.984 { 00:06:26.984 "code": -19, 00:06:26.984 "message": "No such device" 00:06:26.984 } 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.984 [2024-10-09 00:13:57.541897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.984 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.244 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.244 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:27.244 { 00:06:27.244 "subsystems": [ 00:06:27.244 { 00:06:27.244 "subsystem": "scheduler", 00:06:27.244 "config": [ 00:06:27.244 { 00:06:27.244 "method": "framework_set_scheduler", 00:06:27.244 "params": { 00:06:27.244 "name": "static" 00:06:27.244 } 00:06:27.244 } 00:06:27.244 ] 00:06:27.244 }, 00:06:27.244 { 00:06:27.244 "subsystem": "vmd", 00:06:27.244 "config": [] 00:06:27.244 }, 00:06:27.244 { 00:06:27.244 "subsystem": "sock", 00:06:27.244 "config": [ 00:06:27.244 { 00:06:27.244 "method": "sock_set_default_impl", 00:06:27.244 "params": { 00:06:27.244 "impl_name": "posix" 00:06:27.244 } 00:06:27.244 }, 00:06:27.244 { 00:06:27.244 "method": "sock_impl_set_options", 00:06:27.244 "params": { 00:06:27.244 "impl_name": "ssl", 00:06:27.244 "recv_buf_size": 4096, 00:06:27.244 "send_buf_size": 4096, 00:06:27.244 "enable_recv_pipe": true, 00:06:27.244 "enable_quickack": false, 00:06:27.244 
"enable_placement_id": 0, 00:06:27.244 "enable_zerocopy_send_server": true, 00:06:27.244 "enable_zerocopy_send_client": false, 00:06:27.244 "zerocopy_threshold": 0, 00:06:27.244 "tls_version": 0, 00:06:27.244 "enable_ktls": false 00:06:27.244 } 00:06:27.244 }, 00:06:27.244 { 00:06:27.244 "method": "sock_impl_set_options", 00:06:27.244 "params": { 00:06:27.244 "impl_name": "posix", 00:06:27.244 "recv_buf_size": 2097152, 00:06:27.244 "send_buf_size": 2097152, 00:06:27.244 "enable_recv_pipe": true, 00:06:27.244 "enable_quickack": false, 00:06:27.244 "enable_placement_id": 0, 00:06:27.244 "enable_zerocopy_send_server": true, 00:06:27.244 "enable_zerocopy_send_client": false, 00:06:27.244 "zerocopy_threshold": 0, 00:06:27.244 "tls_version": 0, 00:06:27.244 "enable_ktls": false 00:06:27.244 } 00:06:27.244 } 00:06:27.244 ] 00:06:27.244 }, 00:06:27.244 { 00:06:27.244 "subsystem": "iobuf", 00:06:27.244 "config": [ 00:06:27.244 { 00:06:27.244 "method": "iobuf_set_options", 00:06:27.244 "params": { 00:06:27.244 "small_pool_count": 8192, 00:06:27.244 "large_pool_count": 1024, 00:06:27.244 "small_bufsize": 8192, 00:06:27.244 "large_bufsize": 135168 00:06:27.244 } 00:06:27.244 } 00:06:27.244 ] 00:06:27.244 }, 00:06:27.244 { 00:06:27.245 "subsystem": "keyring", 00:06:27.245 "config": [] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "vfio_user_target", 00:06:27.245 "config": null 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "fsdev", 00:06:27.245 "config": [ 00:06:27.245 { 00:06:27.245 "method": "fsdev_set_opts", 00:06:27.245 "params": { 00:06:27.245 "fsdev_io_pool_size": 65535, 00:06:27.245 "fsdev_io_cache_size": 256 00:06:27.245 } 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "accel", 00:06:27.245 "config": [ 00:06:27.245 { 00:06:27.245 "method": "accel_set_options", 00:06:27.245 "params": { 00:06:27.245 "small_cache_size": 128, 00:06:27.245 "large_cache_size": 16, 00:06:27.245 "task_count": 2048, 00:06:27.245 "sequence_count": 2048, 00:06:27.245 "buf_count": 2048 00:06:27.245 } 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "bdev", 00:06:27.245 "config": [ 00:06:27.245 { 00:06:27.245 "method": "bdev_set_options", 00:06:27.245 "params": { 00:06:27.245 "bdev_io_pool_size": 65535, 00:06:27.245 "bdev_io_cache_size": 256, 00:06:27.245 "bdev_auto_examine": true, 00:06:27.245 "iobuf_small_cache_size": 128, 00:06:27.245 "iobuf_large_cache_size": 16 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "bdev_raid_set_options", 00:06:27.245 "params": { 00:06:27.245 "process_window_size_kb": 1024, 00:06:27.245 "process_max_bandwidth_mb_sec": 0 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "bdev_nvme_set_options", 00:06:27.245 "params": { 00:06:27.245 "action_on_timeout": "none", 00:06:27.245 "timeout_us": 0, 00:06:27.245 "timeout_admin_us": 0, 00:06:27.245 "keep_alive_timeout_ms": 10000, 00:06:27.245 "arbitration_burst": 0, 00:06:27.245 "low_priority_weight": 0, 00:06:27.245 "medium_priority_weight": 0, 00:06:27.245 "high_priority_weight": 0, 00:06:27.245 "nvme_adminq_poll_period_us": 10000, 00:06:27.245 "nvme_ioq_poll_period_us": 0, 00:06:27.245 "io_queue_requests": 0, 00:06:27.245 "delay_cmd_submit": true, 00:06:27.245 "transport_retry_count": 4, 00:06:27.245 "bdev_retry_count": 3, 00:06:27.245 "transport_ack_timeout": 0, 00:06:27.245 "ctrlr_loss_timeout_sec": 0, 00:06:27.245 "reconnect_delay_sec": 0, 00:06:27.245 "fast_io_fail_timeout_sec": 0, 00:06:27.245 
"disable_auto_failback": false, 00:06:27.245 "generate_uuids": false, 00:06:27.245 "transport_tos": 0, 00:06:27.245 "nvme_error_stat": false, 00:06:27.245 "rdma_srq_size": 0, 00:06:27.245 "io_path_stat": false, 00:06:27.245 "allow_accel_sequence": false, 00:06:27.245 "rdma_max_cq_size": 0, 00:06:27.245 "rdma_cm_event_timeout_ms": 0, 00:06:27.245 "dhchap_digests": [ 00:06:27.245 "sha256", 00:06:27.245 "sha384", 00:06:27.245 "sha512" 00:06:27.245 ], 00:06:27.245 "dhchap_dhgroups": [ 00:06:27.245 "null", 00:06:27.245 "ffdhe2048", 00:06:27.245 "ffdhe3072", 00:06:27.245 "ffdhe4096", 00:06:27.245 "ffdhe6144", 00:06:27.245 "ffdhe8192" 00:06:27.245 ] 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "bdev_nvme_set_hotplug", 00:06:27.245 "params": { 00:06:27.245 "period_us": 100000, 00:06:27.245 "enable": false 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "bdev_iscsi_set_options", 00:06:27.245 "params": { 00:06:27.245 "timeout_sec": 30 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "bdev_wait_for_examine" 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "nvmf", 00:06:27.245 "config": [ 00:06:27.245 { 00:06:27.245 "method": "nvmf_set_config", 00:06:27.245 "params": { 00:06:27.245 "discovery_filter": "match_any", 00:06:27.245 "admin_cmd_passthru": { 00:06:27.245 "identify_ctrlr": false 00:06:27.245 }, 00:06:27.245 "dhchap_digests": [ 00:06:27.245 "sha256", 00:06:27.245 "sha384", 00:06:27.245 "sha512" 00:06:27.245 ], 00:06:27.245 "dhchap_dhgroups": [ 00:06:27.245 "null", 00:06:27.245 "ffdhe2048", 00:06:27.245 "ffdhe3072", 00:06:27.245 "ffdhe4096", 00:06:27.245 "ffdhe6144", 00:06:27.245 "ffdhe8192" 00:06:27.245 ] 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "nvmf_set_max_subsystems", 00:06:27.245 "params": { 00:06:27.245 "max_subsystems": 1024 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "nvmf_set_crdt", 00:06:27.245 "params": { 00:06:27.245 "crdt1": 0, 00:06:27.245 "crdt2": 0, 00:06:27.245 "crdt3": 0 00:06:27.245 } 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "method": "nvmf_create_transport", 00:06:27.245 "params": { 00:06:27.245 "trtype": "TCP", 00:06:27.245 "max_queue_depth": 128, 00:06:27.245 "max_io_qpairs_per_ctrlr": 127, 00:06:27.245 "in_capsule_data_size": 4096, 00:06:27.245 "max_io_size": 131072, 00:06:27.245 "io_unit_size": 131072, 00:06:27.245 "max_aq_depth": 128, 00:06:27.245 "num_shared_buffers": 511, 00:06:27.245 "buf_cache_size": 4294967295, 00:06:27.245 "dif_insert_or_strip": false, 00:06:27.245 "zcopy": false, 00:06:27.245 "c2h_success": true, 00:06:27.245 "sock_priority": 0, 00:06:27.245 "abort_timeout_sec": 1, 00:06:27.245 "ack_timeout": 0, 00:06:27.245 "data_wr_pool_size": 0 00:06:27.245 } 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "nbd", 00:06:27.245 "config": [] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "ublk", 00:06:27.245 "config": [] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "vhost_blk", 00:06:27.245 "config": [] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "scsi", 00:06:27.245 "config": null 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "iscsi", 00:06:27.245 "config": [ 00:06:27.245 { 00:06:27.245 "method": "iscsi_set_options", 00:06:27.245 "params": { 00:06:27.245 "node_base": "iqn.2016-06.io.spdk", 00:06:27.245 "max_sessions": 128, 00:06:27.245 "max_connections_per_session": 2, 00:06:27.245 "max_queue_depth": 64, 00:06:27.245 
"default_time2wait": 2, 00:06:27.245 "default_time2retain": 20, 00:06:27.245 "first_burst_length": 8192, 00:06:27.245 "immediate_data": true, 00:06:27.245 "allow_duplicated_isid": false, 00:06:27.245 "error_recovery_level": 0, 00:06:27.245 "nop_timeout": 60, 00:06:27.245 "nop_in_interval": 30, 00:06:27.245 "disable_chap": false, 00:06:27.245 "require_chap": false, 00:06:27.245 "mutual_chap": false, 00:06:27.245 "chap_group": 0, 00:06:27.245 "max_large_datain_per_connection": 64, 00:06:27.245 "max_r2t_per_connection": 4, 00:06:27.245 "pdu_pool_size": 36864, 00:06:27.245 "immediate_data_pool_size": 16384, 00:06:27.245 "data_out_pool_size": 2048 00:06:27.245 } 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 }, 00:06:27.245 { 00:06:27.245 "subsystem": "vhost_scsi", 00:06:27.245 "config": [] 00:06:27.245 } 00:06:27.245 ] 00:06:27.245 } 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3874124 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3874124 ']' 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3874124 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3874124 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3874124' 00:06:27.245 killing process with pid 3874124 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3874124 00:06:27.245 00:13:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3874124 00:06:27.504 00:13:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3874309 00:06:27.504 00:13:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:27.504 00:13:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3874309 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3874309 ']' 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3874309 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3874309 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3874309' 00:06:32.776 killing process with pid 3874309 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3874309 00:06:32.776 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3874309 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:33.036 00:06:33.036 real 0m6.935s 00:06:33.036 user 0m6.700s 00:06:33.036 sys 0m0.738s 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 ************************************ 00:06:33.036 END TEST skip_rpc_with_json 00:06:33.036 ************************************ 00:06:33.036 00:14:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:33.036 00:14:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.036 00:14:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.036 00:14:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 ************************************ 00:06:33.036 START TEST skip_rpc_with_delay 00:06:33.036 ************************************ 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:33.036 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:06:33.295 [2024-10-09 00:14:03.679947] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:33.295 [2024-10-09 00:14:03.680068] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.295 00:06:33.295 real 0m0.049s 00:06:33.295 user 0m0.025s 00:06:33.295 sys 0m0.024s 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.295 00:14:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:33.295 ************************************ 00:06:33.295 END TEST skip_rpc_with_delay 00:06:33.295 ************************************ 00:06:33.295 00:14:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:33.295 00:14:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:33.295 00:14:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:33.295 00:14:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.295 00:14:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.295 00:14:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.295 ************************************ 00:06:33.295 START TEST exit_on_failed_rpc_init 00:06:33.295 ************************************ 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3875124 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3875124 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3875124 ']' 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.295 00:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.295 [2024-10-09 00:14:03.795461] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:33.295 [2024-10-09 00:14:03.795518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875124 ] 00:06:33.295 [2024-10-09 00:14:03.869425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.554 [2024-10-09 00:14:03.962279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:34.122 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.122 [2024-10-09 00:14:04.693313] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:34.122 [2024-10-09 00:14:04.693379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875244 ] 00:06:34.382 [2024-10-09 00:14:04.765171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.382 [2024-10-09 00:14:04.849323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.382 [2024-10-09 00:14:04.849405] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:34.382 [2024-10-09 00:14:04.849419] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:34.382 [2024-10-09 00:14:04.849427] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3875124 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3875124 ']' 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3875124 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3875124 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3875124' 00:06:34.382 killing process with pid 3875124 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3875124 00:06:34.382 00:14:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3875124 00:06:34.950 00:06:34.950 real 0m1.574s 00:06:34.950 user 0m1.807s 00:06:34.950 sys 0m0.457s 00:06:34.950 00:14:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.950 00:14:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.950 ************************************ 00:06:34.950 END TEST exit_on_failed_rpc_init 00:06:34.950 ************************************ 00:06:34.950 00:14:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:34.950 00:06:34.950 real 0m14.472s 00:06:34.950 user 0m13.883s 00:06:34.950 sys 0m1.866s 00:06:34.950 00:14:05 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.950 00:14:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.950 ************************************ 00:06:34.950 END TEST skip_rpc 00:06:34.950 ************************************ 00:06:34.950 00:14:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.950 00:14:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.950 00:14:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.950 00:14:05 
-- common/autotest_common.sh@10 -- # set +x 00:06:34.950 ************************************ 00:06:34.950 START TEST rpc_client 00:06:34.950 ************************************ 00:06:34.950 00:14:05 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.950 * Looking for test storage... 00:06:34.950 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:34.950 00:14:05 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.950 00:14:05 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.210 00:14:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.210 --rc genhtml_branch_coverage=1 00:06:35.210 --rc genhtml_function_coverage=1 00:06:35.210 --rc genhtml_legend=1 00:06:35.210 --rc geninfo_all_blocks=1 00:06:35.210 --rc geninfo_unexecuted_blocks=1 00:06:35.210 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.210 ' 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.210 --rc genhtml_branch_coverage=1 00:06:35.210 --rc genhtml_function_coverage=1 00:06:35.210 --rc genhtml_legend=1 00:06:35.210 --rc geninfo_all_blocks=1 00:06:35.210 --rc geninfo_unexecuted_blocks=1 00:06:35.210 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.210 ' 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.210 --rc genhtml_branch_coverage=1 00:06:35.210 --rc genhtml_function_coverage=1 00:06:35.210 --rc genhtml_legend=1 00:06:35.210 --rc geninfo_all_blocks=1 00:06:35.210 --rc geninfo_unexecuted_blocks=1 00:06:35.210 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.210 ' 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.210 --rc genhtml_branch_coverage=1 00:06:35.210 --rc genhtml_function_coverage=1 00:06:35.210 --rc genhtml_legend=1 00:06:35.210 --rc geninfo_all_blocks=1 00:06:35.210 --rc geninfo_unexecuted_blocks=1 00:06:35.210 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.210 ' 00:06:35.210 00:14:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:35.210 OK 00:06:35.210 00:14:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:35.210 00:06:35.210 real 0m0.216s 00:06:35.210 user 0m0.121s 00:06:35.210 sys 0m0.112s 00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
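Each test section opens with the same probe of the installed lcov, traced here through the 'lt 1.15 2' / cmp_versions helpers from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as 0. A compact restatement of that comparison (a sketch in the same spirit, not the exact scripts/common.sh implementation):

    # Return success when version $1 sorts strictly before version $2.
    lt_version() {
        local IFS=.-:          # split fields the way cmp_versions does
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1               # equal versions are not less-than
    }
    lt_version 1.15 2 && echo 'lcov 1.15 predates 2'

In the trace, the exports of the LCOV_OPTS and LCOV blocks follow directly from this check.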
00:06:35.210 00:14:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:35.210 ************************************ 00:06:35.210 END TEST rpc_client 00:06:35.210 ************************************ 00:06:35.210 00:14:05 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:35.210 00:14:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.210 00:14:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.210 00:14:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.210 ************************************ 00:06:35.210 START TEST json_config 00:06:35.210 ************************************ 00:06:35.210 00:14:05 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.470 00:14:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.470 00:14:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.470 00:14:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.470 00:14:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.470 00:14:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.470 00:14:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:35.470 00:14:05 json_config -- scripts/common.sh@345 -- # : 1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.470 00:14:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.470 00:14:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@353 -- # local d=1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.470 00:14:05 json_config -- scripts/common.sh@355 -- # echo 1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.470 00:14:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@353 -- # local d=2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.470 00:14:05 json_config -- scripts/common.sh@355 -- # echo 2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.470 00:14:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.470 00:14:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.470 00:14:05 json_config -- scripts/common.sh@368 -- # return 0 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.470 --rc genhtml_branch_coverage=1 00:06:35.470 --rc genhtml_function_coverage=1 00:06:35.470 --rc genhtml_legend=1 00:06:35.470 --rc geninfo_all_blocks=1 00:06:35.470 --rc geninfo_unexecuted_blocks=1 00:06:35.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.470 ' 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.470 --rc genhtml_branch_coverage=1 00:06:35.470 --rc genhtml_function_coverage=1 00:06:35.470 --rc genhtml_legend=1 00:06:35.470 --rc geninfo_all_blocks=1 00:06:35.470 --rc geninfo_unexecuted_blocks=1 00:06:35.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.470 ' 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.470 --rc genhtml_branch_coverage=1 00:06:35.470 --rc genhtml_function_coverage=1 00:06:35.470 --rc genhtml_legend=1 00:06:35.470 --rc geninfo_all_blocks=1 00:06:35.470 --rc geninfo_unexecuted_blocks=1 00:06:35.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.470 ' 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.470 --rc genhtml_branch_coverage=1 00:06:35.470 --rc genhtml_function_coverage=1 00:06:35.470 --rc genhtml_legend=1 00:06:35.470 --rc geninfo_all_blocks=1 00:06:35.470 --rc geninfo_unexecuted_blocks=1 00:06:35.470 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.470 ' 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:35.470 00:14:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.470 00:14:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.470 00:14:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.470 00:14:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.470 00:14:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.470 00:14:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.470 00:14:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.470 00:14:05 json_config -- paths/export.sh@5 -- # export PATH 00:06:35.470 00:14:05 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@51 -- # : 0 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.470 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.470 00:14:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:35.470 WARNING: No tests are enabled so not running JSON configuration tests 00:06:35.470 00:14:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:35.470 00:06:35.470 real 0m0.200s 00:06:35.470 user 0m0.111s 00:06:35.470 sys 0m0.098s 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.470 00:14:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 ************************************ 00:06:35.470 END TEST json_config 00:06:35.470 ************************************ 00:06:35.470 00:14:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.470 00:14:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.470 00:14:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.470 00:14:06 -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 ************************************ 00:06:35.470 START TEST json_config_extra_key 00:06:35.470 ************************************ 00:06:35.470 00:14:06 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov 
--version 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.730 --rc genhtml_branch_coverage=1 00:06:35.730 --rc genhtml_function_coverage=1 00:06:35.730 --rc genhtml_legend=1 00:06:35.730 --rc geninfo_all_blocks=1 00:06:35.730 --rc geninfo_unexecuted_blocks=1 00:06:35.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.730 ' 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.730 --rc genhtml_branch_coverage=1 
00:06:35.730 --rc genhtml_function_coverage=1 00:06:35.730 --rc genhtml_legend=1 00:06:35.730 --rc geninfo_all_blocks=1 00:06:35.730 --rc geninfo_unexecuted_blocks=1 00:06:35.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.730 ' 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.730 --rc genhtml_branch_coverage=1 00:06:35.730 --rc genhtml_function_coverage=1 00:06:35.730 --rc genhtml_legend=1 00:06:35.730 --rc geninfo_all_blocks=1 00:06:35.730 --rc geninfo_unexecuted_blocks=1 00:06:35.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.730 ' 00:06:35.730 00:14:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.730 --rc genhtml_branch_coverage=1 00:06:35.730 --rc genhtml_function_coverage=1 00:06:35.730 --rc genhtml_legend=1 00:06:35.730 --rc geninfo_all_blocks=1 00:06:35.730 --rc geninfo_unexecuted_blocks=1 00:06:35.730 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.730 ' 00:06:35.730 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.730 00:14:06 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.730 00:14:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.730 00:14:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.730 00:14:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.730 00:14:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.730 00:14:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:35.730 00:14:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.730 00:14:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.731 00:14:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.731 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.731 00:14:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.731 00:14:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.731 00:14:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:35.731 INFO: launching applications... 00:06:35.731 00:14:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3875591 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.731 Waiting for target to run... 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3875591 /var/tmp/spdk_tgt.sock 00:06:35.731 00:14:06 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3875591 ']' 00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
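At this point common.sh has launched spdk_tgt against a dedicated RPC socket ('-r /var/tmp/spdk_tgt.sock', driven by extra_key.json) and blocks in waitforlisten until that socket is up, with the 'max_retries=100' budget visible in the trace. A plausible shape for that wait loop (a sketch under those assumptions; the real helper likely does more, such as confirming the RPC server actually responds):

    # Poll until the target PID has created its UNIX-domain RPC socket.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do             # retry budget as in the trace
            kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
            [[ -S "$sock" ]] && return 0            # socket exists: assume listening
            sleep 0.1
        done
        return 1
    }

Using a per-test socket path is what lets this target coexist with others holding the default /var/tmp/spdk.sock.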
00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.731 00:14:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.731 [2024-10-09 00:14:06.284439] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:35.731 [2024-10-09 00:14:06.284513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875591 ] 00:06:36.300 [2024-10-09 00:14:06.733420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.300 [2024-10-09 00:14:06.826016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.567 00:14:07 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.567 00:14:07 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:36.567 00:06:36.567 00:14:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:36.567 INFO: shutting down applications... 00:06:36.567 00:14:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3875591 ]] 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3875591 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3875591 00:06:36.567 00:14:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3875591 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.223 00:14:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.223 SPDK target shutdown done 00:06:37.223 00:14:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.223 Success 00:06:37.223 00:06:37.223 real 0m1.589s 00:06:37.223 user 0m1.181s 00:06:37.223 sys 0m0.617s 00:06:37.223 00:14:07 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.223 00:14:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.223 ************************************ 00:06:37.223 END TEST json_config_extra_key 00:06:37.223 ************************************ 00:06:37.223 00:14:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
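The shutdown just traced is the graceful-stop pattern these tests share: send SIGINT, then poll with 'kill -0' (which only tests that the PID still exists, sending no signal) for up to 30 half-second intervals before reporting 'SPDK target shutdown done'. Restated as a standalone sketch with the bounds taken from the trace:

    # Graceful shutdown as exercised above: SIGINT, then poll liveness.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1   # still alive after ~15s; a caller could escalate to SIGKILL
    }

The alias_rpc section that follows reuses the same killprocess machinery after its load_config exercise.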
00:06:37.223 00:14:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.223 00:14:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.223 00:14:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.223 ************************************ 00:06:37.223 START TEST alias_rpc 00:06:37.223 ************************************ 00:06:37.223 00:14:07 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.223 * Looking for test storage... 00:06:37.223 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.223 00:14:07 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.223 00:14:07 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.223 00:14:07 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.507 00:14:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.507 --rc genhtml_branch_coverage=1 00:06:37.507 --rc genhtml_function_coverage=1 00:06:37.507 --rc genhtml_legend=1 00:06:37.507 --rc geninfo_all_blocks=1 00:06:37.507 --rc geninfo_unexecuted_blocks=1 00:06:37.507 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.507 ' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.507 --rc genhtml_branch_coverage=1 00:06:37.507 --rc genhtml_function_coverage=1 00:06:37.507 --rc genhtml_legend=1 00:06:37.507 --rc geninfo_all_blocks=1 00:06:37.507 --rc geninfo_unexecuted_blocks=1 00:06:37.507 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.507 ' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.507 --rc genhtml_branch_coverage=1 00:06:37.507 --rc genhtml_function_coverage=1 00:06:37.507 --rc genhtml_legend=1 00:06:37.507 --rc geninfo_all_blocks=1 00:06:37.507 --rc geninfo_unexecuted_blocks=1 00:06:37.507 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.507 ' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.507 --rc genhtml_branch_coverage=1 00:06:37.507 --rc genhtml_function_coverage=1 00:06:37.507 --rc genhtml_legend=1 00:06:37.507 --rc geninfo_all_blocks=1 00:06:37.507 --rc geninfo_unexecuted_blocks=1 00:06:37.507 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.507 ' 00:06:37.507 00:14:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.507 00:14:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3875966 00:06:37.507 00:14:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3875966 00:06:37.507 00:14:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.507 00:14:07 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 3875966 ']' 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.507 00:14:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.507 [2024-10-09 00:14:07.934951] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:37.507 [2024-10-09 00:14:07.935022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875966 ] 00:06:37.507 [2024-10-09 00:14:08.010408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.507 [2024-10-09 00:14:08.102204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.544 00:14:08 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.544 00:14:08 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.544 00:14:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.544 00:14:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3875966 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3875966 ']' 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3875966 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3875966 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3875966' 00:06:38.544 killing process with pid 3875966 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@969 -- # kill 3875966 00:06:38.544 00:14:09 alias_rpc -- common/autotest_common.sh@974 -- # wait 3875966 00:06:39.111 00:06:39.111 real 0m1.728s 00:06:39.111 user 0m1.844s 00:06:39.111 sys 0m0.515s 00:06:39.111 00:14:09 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.111 00:14:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 ************************************ 00:06:39.111 END TEST alias_rpc 00:06:39.111 ************************************ 00:06:39.111 00:14:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:39.111 00:14:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.111 00:14:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.111 00:14:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.111 00:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 ************************************ 00:06:39.111 START TEST 
spdkcli_tcp 00:06:39.111 ************************************ 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.111 * Looking for test storage... 00:06:39.111 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.111 00:14:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.111 --rc genhtml_branch_coverage=1 00:06:39.111 --rc genhtml_function_coverage=1 00:06:39.111 --rc genhtml_legend=1 00:06:39.111 --rc geninfo_all_blocks=1 00:06:39.111 --rc geninfo_unexecuted_blocks=1 00:06:39.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.111 ' 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.111 --rc genhtml_branch_coverage=1 00:06:39.111 --rc genhtml_function_coverage=1 00:06:39.111 --rc genhtml_legend=1 00:06:39.111 --rc geninfo_all_blocks=1 00:06:39.111 --rc geninfo_unexecuted_blocks=1 00:06:39.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.111 ' 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.111 --rc genhtml_branch_coverage=1 00:06:39.111 --rc genhtml_function_coverage=1 00:06:39.111 --rc genhtml_legend=1 00:06:39.111 --rc geninfo_all_blocks=1 00:06:39.111 --rc geninfo_unexecuted_blocks=1 00:06:39.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.111 ' 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.111 --rc genhtml_branch_coverage=1 00:06:39.111 --rc genhtml_function_coverage=1 00:06:39.111 --rc genhtml_legend=1 00:06:39.111 --rc geninfo_all_blocks=1 00:06:39.111 --rc geninfo_unexecuted_blocks=1 00:06:39.111 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.111 ' 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.111 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.111 00:14:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.112 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3876242 00:06:39.112 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3876242 00:06:39.112 00:14:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3876242 ']' 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.112 00:14:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.370 [2024-10-09 00:14:09.765363] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:39.370 [2024-10-09 00:14:09.765435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876242 ] 00:06:39.370 [2024-10-09 00:14:09.838719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.370 [2024-10-09 00:14:09.920172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.370 [2024-10-09 00:14:09.920174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.308 00:14:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.308 00:14:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:40.308 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3876403 00:06:40.308 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.308 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.308 [ 00:06:40.308 "spdk_get_version", 00:06:40.308 "rpc_get_methods", 00:06:40.308 "notify_get_notifications", 00:06:40.308 "notify_get_types", 00:06:40.308 "trace_get_info", 00:06:40.308 "trace_get_tpoint_group_mask", 00:06:40.308 "trace_disable_tpoint_group", 00:06:40.308 "trace_enable_tpoint_group", 00:06:40.308 "trace_clear_tpoint_mask", 00:06:40.308 "trace_set_tpoint_mask", 00:06:40.308 "fsdev_set_opts", 00:06:40.308 "fsdev_get_opts", 00:06:40.308 "framework_get_pci_devices", 00:06:40.308 "framework_get_config", 00:06:40.308 "framework_get_subsystems", 00:06:40.308 "vfu_tgt_set_base_path", 00:06:40.308 
"keyring_get_keys", 00:06:40.308 "iobuf_get_stats", 00:06:40.308 "iobuf_set_options", 00:06:40.308 "sock_get_default_impl", 00:06:40.308 "sock_set_default_impl", 00:06:40.308 "sock_impl_set_options", 00:06:40.308 "sock_impl_get_options", 00:06:40.308 "vmd_rescan", 00:06:40.308 "vmd_remove_device", 00:06:40.308 "vmd_enable", 00:06:40.308 "accel_get_stats", 00:06:40.308 "accel_set_options", 00:06:40.308 "accel_set_driver", 00:06:40.308 "accel_crypto_key_destroy", 00:06:40.308 "accel_crypto_keys_get", 00:06:40.308 "accel_crypto_key_create", 00:06:40.308 "accel_assign_opc", 00:06:40.308 "accel_get_module_info", 00:06:40.308 "accel_get_opc_assignments", 00:06:40.308 "bdev_get_histogram", 00:06:40.308 "bdev_enable_histogram", 00:06:40.308 "bdev_set_qos_limit", 00:06:40.308 "bdev_set_qd_sampling_period", 00:06:40.308 "bdev_get_bdevs", 00:06:40.308 "bdev_reset_iostat", 00:06:40.308 "bdev_get_iostat", 00:06:40.308 "bdev_examine", 00:06:40.308 "bdev_wait_for_examine", 00:06:40.308 "bdev_set_options", 00:06:40.308 "scsi_get_devices", 00:06:40.308 "thread_set_cpumask", 00:06:40.308 "scheduler_set_options", 00:06:40.308 "framework_get_governor", 00:06:40.308 "framework_get_scheduler", 00:06:40.308 "framework_set_scheduler", 00:06:40.308 "framework_get_reactors", 00:06:40.308 "thread_get_io_channels", 00:06:40.308 "thread_get_pollers", 00:06:40.308 "thread_get_stats", 00:06:40.308 "framework_monitor_context_switch", 00:06:40.308 "spdk_kill_instance", 00:06:40.308 "log_enable_timestamps", 00:06:40.308 "log_get_flags", 00:06:40.308 "log_clear_flag", 00:06:40.308 "log_set_flag", 00:06:40.308 "log_get_level", 00:06:40.308 "log_set_level", 00:06:40.308 "log_get_print_level", 00:06:40.308 "log_set_print_level", 00:06:40.308 "framework_enable_cpumask_locks", 00:06:40.308 "framework_disable_cpumask_locks", 00:06:40.308 "framework_wait_init", 00:06:40.308 "framework_start_init", 00:06:40.308 "virtio_blk_create_transport", 00:06:40.308 "virtio_blk_get_transports", 00:06:40.308 "vhost_controller_set_coalescing", 00:06:40.308 "vhost_get_controllers", 00:06:40.308 "vhost_delete_controller", 00:06:40.308 "vhost_create_blk_controller", 00:06:40.308 "vhost_scsi_controller_remove_target", 00:06:40.308 "vhost_scsi_controller_add_target", 00:06:40.308 "vhost_start_scsi_controller", 00:06:40.308 "vhost_create_scsi_controller", 00:06:40.308 "ublk_recover_disk", 00:06:40.308 "ublk_get_disks", 00:06:40.308 "ublk_stop_disk", 00:06:40.308 "ublk_start_disk", 00:06:40.308 "ublk_destroy_target", 00:06:40.308 "ublk_create_target", 00:06:40.308 "nbd_get_disks", 00:06:40.308 "nbd_stop_disk", 00:06:40.308 "nbd_start_disk", 00:06:40.308 "env_dpdk_get_mem_stats", 00:06:40.308 "nvmf_stop_mdns_prr", 00:06:40.308 "nvmf_publish_mdns_prr", 00:06:40.308 "nvmf_subsystem_get_listeners", 00:06:40.308 "nvmf_subsystem_get_qpairs", 00:06:40.308 "nvmf_subsystem_get_controllers", 00:06:40.309 "nvmf_get_stats", 00:06:40.309 "nvmf_get_transports", 00:06:40.309 "nvmf_create_transport", 00:06:40.309 "nvmf_get_targets", 00:06:40.309 "nvmf_delete_target", 00:06:40.309 "nvmf_create_target", 00:06:40.309 "nvmf_subsystem_allow_any_host", 00:06:40.309 "nvmf_subsystem_set_keys", 00:06:40.309 "nvmf_subsystem_remove_host", 00:06:40.309 "nvmf_subsystem_add_host", 00:06:40.309 "nvmf_ns_remove_host", 00:06:40.309 "nvmf_ns_add_host", 00:06:40.309 "nvmf_subsystem_remove_ns", 00:06:40.309 "nvmf_subsystem_set_ns_ana_group", 00:06:40.309 "nvmf_subsystem_add_ns", 00:06:40.309 "nvmf_subsystem_listener_set_ana_state", 00:06:40.309 "nvmf_discovery_get_referrals", 
00:06:40.309 "nvmf_discovery_remove_referral", 00:06:40.309 "nvmf_discovery_add_referral", 00:06:40.309 "nvmf_subsystem_remove_listener", 00:06:40.309 "nvmf_subsystem_add_listener", 00:06:40.309 "nvmf_delete_subsystem", 00:06:40.309 "nvmf_create_subsystem", 00:06:40.309 "nvmf_get_subsystems", 00:06:40.309 "nvmf_set_crdt", 00:06:40.309 "nvmf_set_config", 00:06:40.309 "nvmf_set_max_subsystems", 00:06:40.309 "iscsi_get_histogram", 00:06:40.309 "iscsi_enable_histogram", 00:06:40.309 "iscsi_set_options", 00:06:40.309 "iscsi_get_auth_groups", 00:06:40.309 "iscsi_auth_group_remove_secret", 00:06:40.309 "iscsi_auth_group_add_secret", 00:06:40.309 "iscsi_delete_auth_group", 00:06:40.309 "iscsi_create_auth_group", 00:06:40.309 "iscsi_set_discovery_auth", 00:06:40.309 "iscsi_get_options", 00:06:40.309 "iscsi_target_node_request_logout", 00:06:40.309 "iscsi_target_node_set_redirect", 00:06:40.309 "iscsi_target_node_set_auth", 00:06:40.309 "iscsi_target_node_add_lun", 00:06:40.309 "iscsi_get_stats", 00:06:40.309 "iscsi_get_connections", 00:06:40.309 "iscsi_portal_group_set_auth", 00:06:40.309 "iscsi_start_portal_group", 00:06:40.309 "iscsi_delete_portal_group", 00:06:40.309 "iscsi_create_portal_group", 00:06:40.309 "iscsi_get_portal_groups", 00:06:40.309 "iscsi_delete_target_node", 00:06:40.309 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.309 "iscsi_target_node_add_pg_ig_maps", 00:06:40.309 "iscsi_create_target_node", 00:06:40.309 "iscsi_get_target_nodes", 00:06:40.309 "iscsi_delete_initiator_group", 00:06:40.309 "iscsi_initiator_group_remove_initiators", 00:06:40.309 "iscsi_initiator_group_add_initiators", 00:06:40.309 "iscsi_create_initiator_group", 00:06:40.309 "iscsi_get_initiator_groups", 00:06:40.309 "fsdev_aio_delete", 00:06:40.309 "fsdev_aio_create", 00:06:40.309 "keyring_linux_set_options", 00:06:40.309 "keyring_file_remove_key", 00:06:40.309 "keyring_file_add_key", 00:06:40.309 "vfu_virtio_create_fs_endpoint", 00:06:40.309 "vfu_virtio_create_scsi_endpoint", 00:06:40.309 "vfu_virtio_scsi_remove_target", 00:06:40.309 "vfu_virtio_scsi_add_target", 00:06:40.309 "vfu_virtio_create_blk_endpoint", 00:06:40.309 "vfu_virtio_delete_endpoint", 00:06:40.309 "iaa_scan_accel_module", 00:06:40.309 "dsa_scan_accel_module", 00:06:40.309 "ioat_scan_accel_module", 00:06:40.309 "accel_error_inject_error", 00:06:40.309 "bdev_iscsi_delete", 00:06:40.309 "bdev_iscsi_create", 00:06:40.309 "bdev_iscsi_set_options", 00:06:40.309 "bdev_virtio_attach_controller", 00:06:40.309 "bdev_virtio_scsi_get_devices", 00:06:40.309 "bdev_virtio_detach_controller", 00:06:40.309 "bdev_virtio_blk_set_hotplug", 00:06:40.309 "bdev_ftl_set_property", 00:06:40.309 "bdev_ftl_get_properties", 00:06:40.309 "bdev_ftl_get_stats", 00:06:40.309 "bdev_ftl_unmap", 00:06:40.309 "bdev_ftl_unload", 00:06:40.309 "bdev_ftl_delete", 00:06:40.309 "bdev_ftl_load", 00:06:40.309 "bdev_ftl_create", 00:06:40.309 "bdev_aio_delete", 00:06:40.309 "bdev_aio_rescan", 00:06:40.309 "bdev_aio_create", 00:06:40.309 "blobfs_create", 00:06:40.309 "blobfs_detect", 00:06:40.309 "blobfs_set_cache_size", 00:06:40.309 "bdev_zone_block_delete", 00:06:40.309 "bdev_zone_block_create", 00:06:40.309 "bdev_delay_delete", 00:06:40.309 "bdev_delay_create", 00:06:40.309 "bdev_delay_update_latency", 00:06:40.309 "bdev_split_delete", 00:06:40.309 "bdev_split_create", 00:06:40.309 "bdev_error_inject_error", 00:06:40.309 "bdev_error_delete", 00:06:40.309 "bdev_error_create", 00:06:40.309 "bdev_raid_set_options", 00:06:40.309 "bdev_raid_remove_base_bdev", 00:06:40.309 
"bdev_raid_add_base_bdev", 00:06:40.309 "bdev_raid_delete", 00:06:40.309 "bdev_raid_create", 00:06:40.309 "bdev_raid_get_bdevs", 00:06:40.309 "bdev_lvol_set_parent_bdev", 00:06:40.309 "bdev_lvol_set_parent", 00:06:40.309 "bdev_lvol_check_shallow_copy", 00:06:40.309 "bdev_lvol_start_shallow_copy", 00:06:40.309 "bdev_lvol_grow_lvstore", 00:06:40.309 "bdev_lvol_get_lvols", 00:06:40.309 "bdev_lvol_get_lvstores", 00:06:40.309 "bdev_lvol_delete", 00:06:40.309 "bdev_lvol_set_read_only", 00:06:40.309 "bdev_lvol_resize", 00:06:40.309 "bdev_lvol_decouple_parent", 00:06:40.309 "bdev_lvol_inflate", 00:06:40.309 "bdev_lvol_rename", 00:06:40.309 "bdev_lvol_clone_bdev", 00:06:40.309 "bdev_lvol_clone", 00:06:40.309 "bdev_lvol_snapshot", 00:06:40.309 "bdev_lvol_create", 00:06:40.309 "bdev_lvol_delete_lvstore", 00:06:40.309 "bdev_lvol_rename_lvstore", 00:06:40.309 "bdev_lvol_create_lvstore", 00:06:40.309 "bdev_passthru_delete", 00:06:40.309 "bdev_passthru_create", 00:06:40.309 "bdev_nvme_cuse_unregister", 00:06:40.309 "bdev_nvme_cuse_register", 00:06:40.309 "bdev_opal_new_user", 00:06:40.309 "bdev_opal_set_lock_state", 00:06:40.309 "bdev_opal_delete", 00:06:40.309 "bdev_opal_get_info", 00:06:40.309 "bdev_opal_create", 00:06:40.309 "bdev_nvme_opal_revert", 00:06:40.309 "bdev_nvme_opal_init", 00:06:40.309 "bdev_nvme_send_cmd", 00:06:40.309 "bdev_nvme_set_keys", 00:06:40.309 "bdev_nvme_get_path_iostat", 00:06:40.309 "bdev_nvme_get_mdns_discovery_info", 00:06:40.309 "bdev_nvme_stop_mdns_discovery", 00:06:40.309 "bdev_nvme_start_mdns_discovery", 00:06:40.309 "bdev_nvme_set_multipath_policy", 00:06:40.309 "bdev_nvme_set_preferred_path", 00:06:40.309 "bdev_nvme_get_io_paths", 00:06:40.309 "bdev_nvme_remove_error_injection", 00:06:40.309 "bdev_nvme_add_error_injection", 00:06:40.309 "bdev_nvme_get_discovery_info", 00:06:40.309 "bdev_nvme_stop_discovery", 00:06:40.309 "bdev_nvme_start_discovery", 00:06:40.309 "bdev_nvme_get_controller_health_info", 00:06:40.309 "bdev_nvme_disable_controller", 00:06:40.309 "bdev_nvme_enable_controller", 00:06:40.309 "bdev_nvme_reset_controller", 00:06:40.309 "bdev_nvme_get_transport_statistics", 00:06:40.309 "bdev_nvme_apply_firmware", 00:06:40.309 "bdev_nvme_detach_controller", 00:06:40.309 "bdev_nvme_get_controllers", 00:06:40.309 "bdev_nvme_attach_controller", 00:06:40.309 "bdev_nvme_set_hotplug", 00:06:40.309 "bdev_nvme_set_options", 00:06:40.309 "bdev_null_resize", 00:06:40.309 "bdev_null_delete", 00:06:40.309 "bdev_null_create", 00:06:40.309 "bdev_malloc_delete", 00:06:40.309 "bdev_malloc_create" 00:06:40.309 ] 00:06:40.309 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.309 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.309 00:14:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3876242 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3876242 ']' 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3876242 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3876242 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.309 
00:14:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3876242' 00:06:40.309 killing process with pid 3876242 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3876242 00:06:40.309 00:14:10 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3876242 00:06:40.876 00:06:40.877 real 0m1.745s 00:06:40.877 user 0m3.156s 00:06:40.877 sys 0m0.535s 00:06:40.877 00:14:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.877 00:14:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.877 ************************************ 00:06:40.877 END TEST spdkcli_tcp 00:06:40.877 ************************************ 00:06:40.877 00:14:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.877 00:14:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.877 00:14:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.877 00:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:40.877 ************************************ 00:06:40.877 START TEST dpdk_mem_utility 00:06:40.877 ************************************ 00:06:40.877 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.877 * Looking for test storage... 00:06:40.877 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:40.877 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:40.877 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:40.877 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.136 00:14:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.136 --rc genhtml_branch_coverage=1 00:06:41.136 --rc genhtml_function_coverage=1 00:06:41.136 --rc genhtml_legend=1 00:06:41.136 --rc geninfo_all_blocks=1 00:06:41.136 --rc geninfo_unexecuted_blocks=1 00:06:41.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.136 ' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.136 --rc genhtml_branch_coverage=1 00:06:41.136 --rc genhtml_function_coverage=1 00:06:41.136 --rc genhtml_legend=1 00:06:41.136 --rc geninfo_all_blocks=1 00:06:41.136 --rc geninfo_unexecuted_blocks=1 00:06:41.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.136 ' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.136 --rc genhtml_branch_coverage=1 00:06:41.136 --rc genhtml_function_coverage=1 00:06:41.136 --rc genhtml_legend=1 00:06:41.136 --rc geninfo_all_blocks=1 00:06:41.136 --rc geninfo_unexecuted_blocks=1 00:06:41.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.136 ' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.136 --rc genhtml_branch_coverage=1 00:06:41.136 --rc genhtml_function_coverage=1 00:06:41.136 --rc genhtml_legend=1 00:06:41.136 --rc geninfo_all_blocks=1 00:06:41.136 --rc geninfo_unexecuted_blocks=1 00:06:41.136 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.136 ' 00:06:41.136 00:14:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.136 00:14:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3876498 00:06:41.136 00:14:11 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.136 00:14:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3876498 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3876498 ']' 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.136 00:14:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.136 [2024-10-09 00:14:11.585364] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:41.136 [2024-10-09 00:14:11.585434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876498 ] 00:06:41.136 [2024-10-09 00:14:11.658784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.136 [2024-10-09 00:14:11.747088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.074 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.074 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:42.075 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:42.075 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:42.075 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.075 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.075 { 00:06:42.075 "filename": "/tmp/spdk_mem_dump.txt" 00:06:42.075 } 00:06:42.075 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.075 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.075 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:42.075 1 heaps totaling size 860.000000 MiB 00:06:42.075 size: 860.000000 MiB heap id: 0 00:06:42.075 end heaps---------- 00:06:42.075 9 mempools totaling size 642.649841 MiB 00:06:42.075 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.075 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.075 size: 92.545471 MiB name: bdev_io_3876498 00:06:42.075 size: 51.011292 MiB name: evtpool_3876498 00:06:42.075 size: 50.003479 MiB name: msgpool_3876498 00:06:42.075 size: 36.509338 MiB name: fsdev_io_3876498 00:06:42.075 size: 21.763794 MiB name: PDU_Pool 00:06:42.075 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.075 size: 0.026123 MiB name: Session_Pool 00:06:42.075 end mempools------- 00:06:42.075 6 memzones totaling size 4.142822 MiB 00:06:42.075 size: 1.000366 MiB name: RG_ring_0_3876498 00:06:42.075 size: 1.000366 MiB name: RG_ring_1_3876498 00:06:42.075 size: 1.000366 MiB name: RG_ring_4_3876498 
00:06:42.075 size: 1.000366 MiB name: RG_ring_5_3876498 00:06:42.075 size: 0.125366 MiB name: RG_ring_2_3876498 00:06:42.075 size: 0.015991 MiB name: RG_ring_3_3876498 00:06:42.075 end memzones------- 00:06:42.075 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.075 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:42.075 list of free elements. size: 13.984680 MiB 00:06:42.075 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:42.075 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:42.075 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:42.075 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:42.075 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:42.075 element at address: 0x20000b200000 with size: 0.959839 MiB 00:06:42.075 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:42.075 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:42.075 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:42.075 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:42.075 element at address: 0x200003e00000 with size: 0.495422 MiB 00:06:42.075 element at address: 0x200007000000 with size: 0.490723 MiB 00:06:42.075 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:42.075 element at address: 0x200013800000 with size: 0.481934 MiB 00:06:42.075 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:42.075 element at address: 0x200003a00000 with size: 0.355042 MiB 00:06:42.075 list of standard malloc elements. size: 199.218628 MiB 00:06:42.075 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:42.075 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:42.075 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:42.075 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:42.075 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:42.075 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.075 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:42.075 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.075 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:42.075 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003a5b100 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003adb3c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003adb5c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003adf880 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200003eff000 with size: 
0.000183 MiB 00:06:42.075 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20000707da00 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20000707dac0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001387b600 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001387b6c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x2000138fb980 with size: 0.000183 MiB 00:06:42.075 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:42.075 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:42.075 list of memzone associated elements. size: 646.796692 MiB 00:06:42.075 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:42.075 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.075 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:42.075 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.075 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:42.075 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3876498_0 00:06:42.075 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:42.075 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3876498_0 00:06:42.075 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:42.075 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3876498_0 00:06:42.075 element at address: 0x2000139fdb80 with size: 36.008911 MiB 00:06:42.075 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3876498_0 00:06:42.075 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:42.075 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.075 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:42.075 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.075 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:42.075 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3876498 00:06:42.075 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:42.075 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3876498 00:06:42.075 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.075 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3876498 00:06:42.075 element at address: 0x2000138fba40 with size: 1.008118 MiB 00:06:42.075 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.075 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:42.075 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.075 element at address: 0x20000b2fde40 with 
size: 1.008118 MiB 00:06:42.075 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.075 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:42.075 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.075 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:42.075 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3876498 00:06:42.075 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:42.075 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3876498 00:06:42.075 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:42.075 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3876498 00:06:42.075 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:42.075 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3876498 00:06:42.075 element at address: 0x200003a5b1c0 with size: 0.500488 MiB 00:06:42.075 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3876498 00:06:42.075 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:06:42.075 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3876498 00:06:42.075 element at address: 0x20001387b780 with size: 0.500488 MiB 00:06:42.075 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.075 element at address: 0x20000707db80 with size: 0.500488 MiB 00:06:42.075 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.075 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:42.075 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.075 element at address: 0x200003adf940 with size: 0.125488 MiB 00:06:42.075 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3876498 00:06:42.075 element at address: 0x20000b2f5b80 with size: 0.031738 MiB 00:06:42.075 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.075 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:42.075 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.075 element at address: 0x200003adb680 with size: 0.016113 MiB 00:06:42.075 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3876498 00:06:42.075 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:42.075 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.075 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:42.075 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3876498 00:06:42.075 element at address: 0x200003adb480 with size: 0.000305 MiB 00:06:42.075 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3876498 00:06:42.075 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:06:42.075 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3876498 00:06:42.076 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:42.076 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.076 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.076 00:14:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3876498 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3876498 ']' 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3876498 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 
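The dpdk_mem_utility pass above reduces to three steps: start spdk_tgt, ask it over RPC to dump its DPDK allocator state, then post-process the dump file. A minimal hand-run sketch of the same flow, assuming a built SPDK checkout as the working directory and that dpdk_mem_info.py picks up the default /tmp/spdk_mem_dump.txt named in the RPC response above:

  ./build/bin/spdk_tgt &                      # target comes up on /var/tmp/spdk.sock
  ./scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt, as echoed above
  ./scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary, as printed above
  ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0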
00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3876498 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3876498' 00:06:42.076 killing process with pid 3876498 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3876498 00:06:42.076 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3876498 00:06:42.337 00:06:42.337 real 0m1.602s 00:06:42.338 user 0m1.638s 00:06:42.338 sys 0m0.494s 00:06:42.338 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.338 00:14:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.338 ************************************ 00:06:42.338 END TEST dpdk_mem_utility 00:06:42.338 ************************************ 00:06:42.604 00:14:13 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:42.604 00:14:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.604 00:14:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.604 00:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.604 ************************************ 00:06:42.604 START TEST event 00:06:42.604 ************************************ 00:06:42.604 00:14:13 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:42.604 * Looking for test storage... 00:06:42.604 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:42.604 00:14:13 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.604 00:14:13 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.604 00:14:13 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.604 00:14:13 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.604 00:14:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.604 00:14:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.604 00:14:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.604 00:14:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.604 00:14:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.604 00:14:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.604 00:14:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.604 00:14:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.604 00:14:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.604 00:14:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.604 00:14:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.604 00:14:13 event -- scripts/common.sh@344 -- # case "$op" in 00:06:42.604 00:14:13 event -- scripts/common.sh@345 -- # : 1 00:06:42.604 00:14:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.604 00:14:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.604 00:14:13 event -- scripts/common.sh@365 -- # decimal 1 00:06:42.604 00:14:13 event -- scripts/common.sh@353 -- # local d=1 00:06:42.604 00:14:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.604 00:14:13 event -- scripts/common.sh@355 -- # echo 1 00:06:42.604 00:14:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.604 00:14:13 event -- scripts/common.sh@366 -- # decimal 2 00:06:42.604 00:14:13 event -- scripts/common.sh@353 -- # local d=2 00:06:42.604 00:14:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.604 00:14:13 event -- scripts/common.sh@355 -- # echo 2 00:06:42.863 00:14:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.863 00:14:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.863 00:14:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.863 00:14:13 event -- scripts/common.sh@368 -- # return 0 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.864 --rc genhtml_branch_coverage=1 00:06:42.864 --rc genhtml_function_coverage=1 00:06:42.864 --rc genhtml_legend=1 00:06:42.864 --rc geninfo_all_blocks=1 00:06:42.864 --rc geninfo_unexecuted_blocks=1 00:06:42.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.864 ' 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.864 --rc genhtml_branch_coverage=1 00:06:42.864 --rc genhtml_function_coverage=1 00:06:42.864 --rc genhtml_legend=1 00:06:42.864 --rc geninfo_all_blocks=1 00:06:42.864 --rc geninfo_unexecuted_blocks=1 00:06:42.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.864 ' 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.864 --rc genhtml_branch_coverage=1 00:06:42.864 --rc genhtml_function_coverage=1 00:06:42.864 --rc genhtml_legend=1 00:06:42.864 --rc geninfo_all_blocks=1 00:06:42.864 --rc geninfo_unexecuted_blocks=1 00:06:42.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.864 ' 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.864 --rc genhtml_branch_coverage=1 00:06:42.864 --rc genhtml_function_coverage=1 00:06:42.864 --rc genhtml_legend=1 00:06:42.864 --rc geninfo_all_blocks=1 00:06:42.864 --rc geninfo_unexecuted_blocks=1 00:06:42.864 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:42.864 ' 00:06:42.864 00:14:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.864 00:14:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.864 00:14:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:42.864 00:14:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:06:42.864 00:14:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.864 ************************************ 00:06:42.864 START TEST event_perf 00:06:42.864 ************************************ 00:06:42.864 00:14:13 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.864 Running I/O for 1 seconds...[2024-10-09 00:14:13.300695] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:42.864 [2024-10-09 00:14:13.300778] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876897 ] 00:06:42.864 [2024-10-09 00:14:13.376237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.864 [2024-10-09 00:14:13.462607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.864 [2024-10-09 00:14:13.462695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.864 [2024-10-09 00:14:13.462755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.864 [2024-10-09 00:14:13.462756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.239 Running I/O for 1 seconds... 00:06:44.239 lcore 0: 191260 00:06:44.239 lcore 1: 191260 00:06:44.239 lcore 2: 191259 00:06:44.239 lcore 3: 191258 00:06:44.239 done. 00:06:44.239 00:06:44.239 real 0m1.254s 00:06:44.239 user 0m4.152s 00:06:44.239 sys 0m0.098s 00:06:44.239 00:14:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.239 00:14:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.239 ************************************ 00:06:44.239 END TEST event_perf 00:06:44.239 ************************************ 00:06:44.239 00:14:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.239 00:14:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:44.239 00:14:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.239 00:14:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.239 ************************************ 00:06:44.239 START TEST event_reactor 00:06:44.239 ************************************ 00:06:44.239 00:14:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.239 [2024-10-09 00:14:14.639882] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
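For reference, the event_perf run above is fully described by its two flags: -m 0xF is the reactor core mask (four cores, matching the four "lcore N" result lines) and -t 1 is the run time in seconds, so each lcore figure is roughly the events completed on that core in one second. A hand-run sketch, assuming a built tree; the 0x1 variant is only an illustrative single-reactor baseline, not something this job executed:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, as logged above
  ./test/event/event_perf/event_perf -m 0x1 -t 1   # hypothetical one-reactor comparison run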
00:06:44.239 [2024-10-09 00:14:14.639965] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877095 ] 00:06:44.239 [2024-10-09 00:14:14.715348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.239 [2024-10-09 00:14:14.795247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.618 test_start 00:06:45.618 oneshot 00:06:45.618 tick 100 00:06:45.618 tick 100 00:06:45.618 tick 250 00:06:45.618 tick 100 00:06:45.618 tick 100 00:06:45.618 tick 100 00:06:45.618 tick 250 00:06:45.618 tick 500 00:06:45.618 tick 100 00:06:45.618 tick 100 00:06:45.618 tick 250 00:06:45.618 tick 100 00:06:45.618 tick 100 00:06:45.618 test_end 00:06:45.618 00:06:45.618 real 0m1.245s 00:06:45.618 user 0m1.149s 00:06:45.618 sys 0m0.093s 00:06:45.618 00:14:15 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.618 00:14:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:45.618 ************************************ 00:06:45.618 END TEST event_reactor 00:06:45.618 ************************************ 00:06:45.618 00:14:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.618 00:14:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:45.618 00:14:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.618 00:14:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.618 ************************************ 00:06:45.618 START TEST event_reactor_perf 00:06:45.618 ************************************ 00:06:45.618 00:14:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.618 [2024-10-09 00:14:15.954391] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
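The oneshot/tick trace from the event_reactor run above appears to come from a one-shot event plus timed pollers with periods of 100, 250 and 500, which would explain why "tick 100" fires most often between test_start and test_end; the reactor_perf app starting here instead just counts how many events one reactor can process in the same -t window. Both can be replayed standalone with the invocations already shown in the log:

  ./test/event/reactor/reactor -t 1             # emits the test_start/oneshot/tick/test_end trace
  ./test/event/reactor_perf/reactor_perf -t 1   # emits a "Performance: N events per second" line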
00:06:45.618 [2024-10-09 00:14:15.954481] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877287 ] 00:06:45.618 [2024-10-09 00:14:16.028597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.618 [2024-10-09 00:14:16.111394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.553 test_start 00:06:46.553 test_end 00:06:46.553 Performance: 972361 events per second 00:06:46.553 00:06:46.553 real 0m1.245s 00:06:46.553 user 0m1.152s 00:06:46.553 sys 0m0.089s 00:06:46.553 00:14:17 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.553 00:14:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.553 ************************************ 00:06:46.553 END TEST event_reactor_perf 00:06:46.812 ************************************ 00:06:46.812 00:14:17 event -- event/event.sh@49 -- # uname -s 00:06:46.812 00:14:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.812 00:14:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.812 00:14:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.812 00:14:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.812 00:14:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.812 ************************************ 00:06:46.812 START TEST event_scheduler 00:06:46.812 ************************************ 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:46.812 * Looking for test storage... 
00:06:46.812 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.812 00:14:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.812 00:14:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.812 --rc genhtml_branch_coverage=1 00:06:46.812 --rc genhtml_function_coverage=1 00:06:46.812 --rc genhtml_legend=1 00:06:46.812 --rc geninfo_all_blocks=1 00:06:46.813 --rc geninfo_unexecuted_blocks=1 00:06:46.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.813 ' 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.813 --rc genhtml_branch_coverage=1 00:06:46.813 --rc genhtml_function_coverage=1 00:06:46.813 --rc genhtml_legend=1 00:06:46.813 --rc geninfo_all_blocks=1 00:06:46.813 --rc geninfo_unexecuted_blocks=1 00:06:46.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.813 ' 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.813 --rc genhtml_branch_coverage=1 00:06:46.813 --rc genhtml_function_coverage=1 00:06:46.813 --rc genhtml_legend=1 00:06:46.813 --rc geninfo_all_blocks=1 00:06:46.813 --rc geninfo_unexecuted_blocks=1 00:06:46.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.813 ' 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.813 --rc genhtml_branch_coverage=1 00:06:46.813 --rc genhtml_function_coverage=1 00:06:46.813 --rc genhtml_legend=1 00:06:46.813 --rc geninfo_all_blocks=1 00:06:46.813 --rc geninfo_unexecuted_blocks=1 00:06:46.813 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:46.813 ' 00:06:46.813 00:14:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.813 00:14:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3877533 00:06:46.813 00:14:17 event.event_scheduler -- 
scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.813 00:14:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.813 00:14:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3877533 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3877533 ']' 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.813 00:14:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.071 [2024-10-09 00:14:17.464751] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:47.071 [2024-10-09 00:14:17.464849] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877533 ] 00:06:47.071 [2024-10-09 00:14:17.535105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.071 [2024-10-09 00:14:17.617458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.071 [2024-10-09 00:14:17.617536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.071 [2024-10-09 00:14:17.617611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.071 [2024-10-09 00:14:17.617613] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:48.006 00:14:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 [2024-10-09 00:14:18.328060] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:48.006 [2024-10-09 00:14:18.328084] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:48.006 [2024-10-09 00:14:18.328097] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.006 [2024-10-09 00:14:18.328105] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.006 [2024-10-09 00:14:18.328112] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
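Aside: the lt/cmp_versions xtrace near the top of this block is deciding whether the installed lcov predates 2.x, which is what gates the --rc lcov_* compatibility options exported just after it. A minimal standalone sketch of that dotted-version comparison, under the hypothetical name ver_lt and assuming purely numeric fields (the real helper is cmp_versions in scripts/common.sh):

ver_lt() {
    # Split both versions on the same separators the trace uses (.-:),
    # then compare field by field; missing fields count as 0.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints, matching the trace's return 0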
00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 [2024-10-09 00:14:18.405066] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 ************************************ 00:06:48.006 START TEST scheduler_create_thread 00:06:48.006 ************************************ 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 2 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 3 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 4 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 5 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 
00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 6 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 7 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 8 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 9 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.006 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.576 10 00:06:48.576 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.576 00:14:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.576 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.576 00:14:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.954 00:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.954 00:14:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:49.954 00:14:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:49.954 00:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.954 00:14:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.521 00:14:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.521 00:14:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.521 00:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.521 00:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.456 00:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.456 00:14:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:51.456 00:14:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:51.456 00:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.456 00:14:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.392 00:14:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.392 00:06:52.392 real 0m4.224s 00:06:52.392 user 0m0.019s 00:06:52.392 sys 0m0.006s 00:06:52.392 00:14:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.392 00:14:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.392 ************************************ 00:06:52.392 END TEST scheduler_create_thread 00:06:52.392 ************************************ 00:06:52.392 00:14:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:52.392 00:14:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3877533 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3877533 ']' 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3877533 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3877533 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3877533' 00:06:52.392 killing process with pid 3877533 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3877533 00:06:52.392 00:14:22 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3877533 00:06:52.392 [2024-10-09 00:14:22.951654] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
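Aside: stripped of the xtrace noise, the scheduler_create_thread test above boils down to a short RPC script. A condensed sketch of it, where rpc_cmd is assumed to wrap scripts/rpc.py against the app's socket (as scheduler.sh arranges) and the masks and priorities are the ones visible in the trace:

# Pick the dynamic scheduler while the app is still paused, then let it run.
rpc_cmd framework_set_scheduler dynamic
rpc_cmd framework_start_init

# One fully busy and one idle thread pinned to each of the four cores.
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads return an id that later calls can retune or delete.
id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"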
00:06:52.651 00:06:52.651 real 0m5.930s 00:06:52.651 user 0m13.227s 00:06:52.651 sys 0m0.449s 00:06:52.652 00:14:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.652 00:14:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 ************************************ 00:06:52.652 END TEST event_scheduler 00:06:52.652 ************************************ 00:06:52.652 00:14:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:52.652 00:14:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:52.652 00:14:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.652 00:14:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.652 00:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 ************************************ 00:06:52.652 START TEST app_repeat 00:06:52.652 ************************************ 00:06:52.652 00:14:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:52.652 00:14:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3878295 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3878295' 00:06:52.911 Process app_repeat pid: 3878295 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:52.911 spdk_app_start Round 0 00:06:52.911 00:14:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3878295 /var/tmp/spdk-nbd.sock 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3878295 ']' 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.911 00:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.911 [2024-10-09 00:14:23.310058] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
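Aside: waitforlisten, which the app_repeat trace enters here, is the same launch-and-poll pattern used for the scheduler app earlier: start the binary against a private RPC socket, arm a cleanup trap, and retry a harmless RPC until the socket answers. A simplified sketch, with SPDK_DIR standing in for the long workspace path and rpc_get_methods assumed as the probe:

rpc_sock=/var/tmp/spdk-nbd.sock
"$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$rpc_sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill -9 $repeat_pid 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

# Poll until the app is listening (the trace sets max_retries=100).
for (( i = 0; i < 100; i++ )); do
    "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done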
00:06:52.911 [2024-10-09 00:14:23.310154] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878295 ] 00:06:52.911 [2024-10-09 00:14:23.386126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.911 [2024-10-09 00:14:23.472504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.911 [2024-10-09 00:14:23.472506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.853 00:14:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.853 00:14:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:53.853 00:14:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.853 Malloc0 00:06:53.853 00:14:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.111 Malloc1 00:06:54.111 00:14:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.111 00:14:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.370 /dev/nbd0 00:06:54.370 00:14:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.370 00:14:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.370 1+0 records in 00:06:54.370 1+0 records out 00:06:54.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250793 s, 16.3 MB/s 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.370 00:14:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.370 00:14:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.370 00:14:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.370 00:14:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.629 /dev/nbd1 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.629 1+0 records in 00:06:54.629 1+0 records out 00:06:54.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237839 s, 17.2 MB/s 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.629 00:14:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
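Aside: the waitfornbd calls traced above gate every later dd: the helper spins on /proc/partitions until the kernel exposes the device, then proves it readable with one 4 KiB O_DIRECT read. A self-contained sketch; the temp path here is illustrative (the real helper reuses test/event/nbdtest, as the trace shows):

waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    for (( i = 1; i <= 20; i++ )); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            # Device is visible; a full single-block read confirms it works.
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct &&
                [[ $(stat -c %s "$tmp") -eq 4096 ]]
            local rc=$?
            rm -f "$tmp"
            return $rc
        fi
        sleep 0.1
    done
    return 1   # device never appeared in /proc/partitions
}

waitfornbd nbd0 && waitfornbd nbd1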
00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.629 00:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.888 { 00:06:54.888 "nbd_device": "/dev/nbd0", 00:06:54.888 "bdev_name": "Malloc0" 00:06:54.888 }, 00:06:54.888 { 00:06:54.888 "nbd_device": "/dev/nbd1", 00:06:54.888 "bdev_name": "Malloc1" 00:06:54.888 } 00:06:54.888 ]' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.888 { 00:06:54.888 "nbd_device": "/dev/nbd0", 00:06:54.888 "bdev_name": "Malloc0" 00:06:54.888 }, 00:06:54.888 { 00:06:54.888 "nbd_device": "/dev/nbd1", 00:06:54.888 "bdev_name": "Malloc1" 00:06:54.888 } 00:06:54.888 ]' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.888 /dev/nbd1' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.888 /dev/nbd1' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.888 256+0 records in 00:06:54.888 256+0 records out 00:06:54.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104786 s, 100 MB/s 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.888 256+0 records in 00:06:54.888 256+0 records out 00:06:54.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199675 s, 52.5 MB/s 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.888 256+0 records in 00:06:54.888 256+0 records out 00:06:54.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215151 s, 48.7 MB/s 
00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.888 00:14:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.889 00:14:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.148 00:14:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.406 00:14:25 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.406 00:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.665 00:14:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.665 00:14:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.924 00:14:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.924 [2024-10-09 00:14:26.526652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.183 [2024-10-09 00:14:26.609005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.183 [2024-10-09 00:14:26.609005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.183 [2024-10-09 00:14:26.650490] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.183 [2024-10-09 00:14:26.650532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.715 00:14:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.715 00:14:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.715 spdk_app_start Round 1 00:06:58.715 00:14:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3878295 /var/tmp/spdk-nbd.sock 00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3878295 ']' 00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
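Aside: the Round 0 write/verify pass that just finished is a plain round-trip, repeated in every round. A sketch of its core; mktemp is a stand-in for the pattern file the trace names (test/event/nbdrandtest):

pattern=$(mktemp)   # the trace uses .../spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256   # one random 1 MiB pattern

for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct   # write it out
done
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$pattern" "$dev"   # read back; non-zero exit on any mismatch
done
rm "$pattern"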
00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.715 00:14:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.973 00:14:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.973 00:14:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:58.973 00:14:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.232 Malloc0 00:06:59.232 00:14:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.491 Malloc1 00:06:59.491 00:14:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.491 00:14:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.750 /dev/nbd0 00:06:59.750 00:14:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.750 00:14:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.750 1+0 records in 00:06:59.750 1+0 records out 00:06:59.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000110709 s, 37.0 MB/s 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.750 00:14:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.750 00:14:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.750 00:14:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.750 00:14:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.009 /dev/nbd1 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.009 1+0 records in 00:07:00.009 1+0 records out 00:07:00.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252051 s, 16.3 MB/s 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.009 00:14:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.009 00:14:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:00.010 00:14:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.010 { 00:07:00.010 "nbd_device": "/dev/nbd0", 00:07:00.010 "bdev_name": "Malloc0" 00:07:00.010 }, 00:07:00.010 { 00:07:00.010 "nbd_device": "/dev/nbd1", 00:07:00.010 "bdev_name": "Malloc1" 00:07:00.010 } 00:07:00.010 ]' 00:07:00.010 00:14:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.010 { 00:07:00.010 "nbd_device": "/dev/nbd0", 00:07:00.010 "bdev_name": "Malloc0" 00:07:00.010 }, 00:07:00.010 { 00:07:00.010 "nbd_device": "/dev/nbd1", 00:07:00.010 "bdev_name": "Malloc1" 00:07:00.010 } 00:07:00.010 ]' 00:07:00.010 00:14:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.269 /dev/nbd1' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.269 /dev/nbd1' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.269 256+0 records in 00:07:00.269 256+0 records out 00:07:00.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115362 s, 90.9 MB/s 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.269 256+0 records in 00:07:00.269 256+0 records out 00:07:00.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200564 s, 52.3 MB/s 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.269 256+0 records in 00:07:00.269 256+0 records out 00:07:00.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218277 s, 48.0 MB/s 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.269 00:14:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.534 00:14:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.793 00:14:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.053 00:14:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.053 00:14:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.053 00:14:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.312 [2024-10-09 00:14:31.852987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.312 [2024-10-09 00:14:31.935017] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.312 [2024-10-09 00:14:31.935018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.571 [2024-10-09 00:14:31.983607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.571 [2024-10-09 00:14:31.983652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.103 00:14:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.103 00:14:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:04.103 spdk_app_start Round 2 00:07:04.103 00:14:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3878295 /var/tmp/spdk-nbd.sock 00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3878295 ']' 00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
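Aside: each teardown above ends with the same accounting check, nbd_get_count: list the exported devices over RPC and require the expected number (2 while the disks are up, 0 after the nbd_stop_disk calls). A sketch, with rpc.py abbreviated from the full workspace path; the || true covers grep -c exiting non-zero when nothing matches:

expected=0   # after both nbd_stop_disk calls, nothing should remain
disks=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(grep -c /dev/nbd <<< "$disks" || true)
[[ $count -eq $expected ]] || exit 1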
00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.103 00:14:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.361 00:14:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.361 00:14:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:04.361 00:14:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.620 Malloc0 00:07:04.620 00:14:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.620 Malloc1 00:07:04.879 00:14:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.879 /dev/nbd0 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.879 1+0 records in 00:07:04.879 1+0 records out 00:07:04.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269052 s, 15.2 MB/s 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.879 00:14:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.879 00:14:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.138 /dev/nbd1 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.139 1+0 records in 00:07:05.139 1+0 records out 00:07:05.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156555 s, 26.2 MB/s 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.139 00:14:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.139 00:14:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.398 { 00:07:05.398 "nbd_device": "/dev/nbd0", 00:07:05.398 "bdev_name": "Malloc0" 00:07:05.398 }, 00:07:05.398 { 00:07:05.398 "nbd_device": "/dev/nbd1", 00:07:05.398 "bdev_name": "Malloc1" 00:07:05.398 } 00:07:05.398 ]' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.398 { 00:07:05.398 "nbd_device": "/dev/nbd0", 00:07:05.398 "bdev_name": "Malloc0" 00:07:05.398 }, 00:07:05.398 { 00:07:05.398 "nbd_device": "/dev/nbd1", 00:07:05.398 "bdev_name": "Malloc1" 00:07:05.398 } 00:07:05.398 ]' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.398 /dev/nbd1' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.398 /dev/nbd1' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.398 00:14:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.398 256+0 records in 00:07:05.398 256+0 records out 00:07:05.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525827 s, 199 MB/s 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.398 00:14:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.657 256+0 records in 00:07:05.657 256+0 records out 00:07:05.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020266 s, 51.7 MB/s 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.657 256+0 records in 00:07:05.657 256+0 records out 00:07:05.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219348 s, 47.8 MB/s 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.657 00:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.916 00:14:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.175 00:14:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.175 00:14:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.434 00:14:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.692 [2024-10-09 00:14:37.159915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.692 [2024-10-09 00:14:37.241830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.692 [2024-10-09 00:14:37.241832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.692 [2024-10-09 00:14:37.289201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.692 [2024-10-09 00:14:37.289243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.994 00:14:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3878295 /var/tmp/spdk-nbd.sock 00:07:09.994 00:14:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3878295 ']' 00:07:09.994 00:14:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.994 00:14:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.994 00:14:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
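The write/verify pass that just completed is the core of the NBD check. A condensed sketch of the same round-trip, with the workspace paths shortened; the commands and sizes are lifted from the trace above:

    # condensed from bdev/nbd_common.sh as traced above; paths shortened
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: generate a 1 MiB random pattern, copy it to every exported device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1M of each device against the pattern;
    # any difference makes cmp exit non-zero and fails the test
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"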
00:07:09.994 00:14:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.995 00:14:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:09.995 00:14:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3878295 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3878295 ']' 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3878295 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3878295 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3878295' 00:07:09.995 killing process with pid 3878295 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3878295 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3878295 00:07:09.995 spdk_app_start is called in Round 0. 00:07:09.995 Shutdown signal received, stop current app iteration 00:07:09.995 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:07:09.995 spdk_app_start is called in Round 1. 00:07:09.995 Shutdown signal received, stop current app iteration 00:07:09.995 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:07:09.995 spdk_app_start is called in Round 2. 00:07:09.995 Shutdown signal received, stop current app iteration 00:07:09.995 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:07:09.995 spdk_app_start is called in Round 3. 
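The four Round notices that follow summarize what app_repeat has been doing for this whole section: the test binary restarts its SPDK app once per round, and event.sh re-drives the NBD verify each time. A speculative reduction of that driver loop; the real logic is split between event.sh and the app_repeat binary, and only the helper names are taken from the trace:

    # speculative reduction; helper and RPC names appear in the trace above
    RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    for round in 0 1 2 3; do
        # re-create the bdevs and run the dd/cmp verify shown earlier
        nbd_rpc_data_verify "$SOCK" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # ask the app to shut down; app_repeat itself starts the next round
        "$RPC" -s "$SOCK" spdk_kill_instance SIGTERM
        sleep 3
    done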
00:07:09.995 Shutdown signal received, stop current app iteration 00:07:09.995 00:14:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:09.995 00:14:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:09.995 00:07:09.995 real 0m17.114s 00:07:09.995 user 0m36.526s 00:07:09.995 sys 0m3.347s 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.995 00:14:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 END TEST app_repeat 00:07:09.995 ************************************ 00:07:09.995 00:14:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:09.995 00:14:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.995 00:14:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.995 00:14:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.995 00:14:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 START TEST cpu_locks 00:07:09.995 ************************************ 00:07:09.995 00:14:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.995 * Looking for test storage... 00:07:09.995 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:09.995 00:14:40 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.995 00:14:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.995 00:14:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.356 00:14:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.356 ' 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.356 ' 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.356 --rc genhtml_branch_coverage=1 00:07:10.356 --rc genhtml_function_coverage=1 00:07:10.356 --rc genhtml_legend=1 00:07:10.356 --rc geninfo_all_blocks=1 00:07:10.356 --rc geninfo_unexecuted_blocks=1 00:07:10.356 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.356 ' 00:07:10.356 00:14:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.357 --rc genhtml_branch_coverage=1 00:07:10.357 --rc genhtml_function_coverage=1 00:07:10.357 --rc genhtml_legend=1 00:07:10.357 --rc geninfo_all_blocks=1 00:07:10.357 --rc geninfo_unexecuted_blocks=1 00:07:10.357 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.357 ' 00:07:10.357 00:14:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.357 00:14:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.357 00:14:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.357 00:14:40 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.357 00:14:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.357 00:14:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.357 00:14:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.357 ************************************ 00:07:10.357 START TEST default_locks 00:07:10.357 ************************************ 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3880807 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3880807 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3880807 ']' 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.357 00:14:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.357 [2024-10-09 00:14:40.715713] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
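waitforlisten is the gate every one of these lock tests passes through: block until the freshly launched spdk_tgt answers RPC on its socket. A plausible reduction of the helper; the real one lives in test/common/autotest_common.sh and retries up to 100 times, as the max_retries=100 lines show:

    waitforlisten_sketch() {        # plausible reduction, not the real helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            # rpc_get_methods is a standard SPDK RPC; success means it is listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }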
00:07:10.357 [2024-10-09 00:14:40.715776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880807 ] 00:07:10.357 [2024-10-09 00:14:40.790037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.357 [2024-10-09 00:14:40.882534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.956 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.956 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:10.956 00:14:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3880807 00:07:10.956 00:14:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3880807 00:07:10.956 00:14:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.523 lslocks: write error 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3880807 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3880807 ']' 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3880807 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3880807 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3880807' 00:07:11.523 killing process with pid 3880807 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3880807 00:07:11.523 00:14:41 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3880807 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3880807 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3880807 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3880807 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3880807 ']' 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.782 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3880807) - No such process 00:07:11.782 ERROR: process (pid: 3880807) is no longer running 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.782 00:07:11.782 real 0m1.671s 00:07:11.782 user 0m1.717s 00:07:11.782 sys 0m0.605s 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.782 00:14:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.782 ************************************ 00:07:11.782 END TEST default_locks 00:07:11.782 ************************************ 00:07:11.782 00:14:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.782 00:14:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.782 00:14:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.782 00:14:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.041 ************************************ 00:07:12.041 START TEST default_locks_via_rpc 00:07:12.041 ************************************ 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3881023 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3881023 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3881023 ']' 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
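The default_locks pass that just finished hinges on one observable: while spdk_tgt -m 0x1 runs, it holds a POSIX lock on an spdk_cpu_lock file for core 0, and locks_exist checks exactly that. The stray "lslocks: write error" lines are harmless; grep -q closes the pipe after its first match and lslocks reports the resulting broken pipe. The check, condensed from the trace:

    # condensed from event/cpu_locks.sh as traced above
    locks_exist_sketch() {
        local pid=$1
        # exit 0 only if the target still holds a lock file for one of its cores
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

Killing the target releases the locks, which is why the follow-up waitforlisten on the dead pid is run under NOT and expected to fail.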
00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.041 00:14:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.041 [2024-10-09 00:14:42.461096] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:12.041 [2024-10-09 00:14:42.461156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881023 ] 00:07:12.041 [2024-10-09 00:14:42.534311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.041 [2024-10-09 00:14:42.623619] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3881023 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3881023 00:07:12.991 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3881023 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3881023 ']' 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3881023 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3881023 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3881023' 00:07:13.254 killing process with pid 3881023 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3881023 00:07:13.254 00:14:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3881023 00:07:13.513 00:07:13.513 real 0m1.699s 00:07:13.513 user 0m1.780s 00:07:13.513 sys 0m0.562s 00:07:13.513 00:14:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.513 00:14:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 ************************************ 00:07:13.513 END TEST default_locks_via_rpc 00:07:13.513 ************************************ 00:07:13.772 00:14:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.772 00:14:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.772 00:14:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.772 00:14:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.772 ************************************ 00:07:13.772 START TEST non_locking_app_on_locked_coremask 00:07:13.772 ************************************ 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3881391 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3881391 /var/tmp/spdk.sock 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3881391 ']' 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.772 00:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.772 [2024-10-09 00:14:44.242734] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
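For the record on the via_rpc variant that just ended: instead of passing --disable-cpumask-locks at startup, it toggles the same locks on a live target with two RPCs that appear verbatim in the trace:

    # both RPCs are standard SPDK framework calls, as traced above
    scripts/rpc.py framework_disable_cpumask_locks   # release the per-core lock files
    # at this point lslocks -p <pid> | grep spdk_cpu_lock finds nothing
    scripts/rpc.py framework_enable_cpumask_locks    # re-claim the cores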
00:07:13.772 [2024-10-09 00:14:44.242825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881391 ] 00:07:13.772 [2024-10-09 00:14:44.315436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.031 [2024-10-09 00:14:44.408146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3881410 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3881410 /var/tmp/spdk2.sock 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3881410 ']' 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.598 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 [2024-10-09 00:14:45.100409] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:14.598 [2024-10-09 00:14:45.100460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881410 ] 00:07:14.598 [2024-10-09 00:14:45.194432] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
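This is the non-locking scenario in miniature: both targets want core 0, but the second one opts out of the lock check, so both start. The launch commands, with the workspace prefix shortened but flags copied from the trace:

    build/bin/spdk_tgt -m 0x1 &    # first instance claims the core-0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # second instance logs "CPU core locks deactivated." and starts anyway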
00:07:14.598 [2024-10-09 00:14:45.194456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.857 [2024-10-09 00:14:45.359552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.422 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.422 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.422 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3881391 00:07:15.422 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3881391 00:07:15.422 00:14:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.796 lslocks: write error 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3881391 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3881391 ']' 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3881391 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3881391 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3881391' 00:07:16.796 killing process with pid 3881391 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3881391 00:07:16.796 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3881391 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3881410 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3881410 ']' 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3881410 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.363 00:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3881410 00:07:17.622 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.622 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.622 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3881410' 00:07:17.622 
killing process with pid 3881410 00:07:17.622 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3881410 00:07:17.622 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3881410 00:07:17.881 00:07:17.881 real 0m4.174s 00:07:17.881 user 0m4.411s 00:07:17.881 sys 0m1.361s 00:07:17.881 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.881 00:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.881 ************************************ 00:07:17.881 END TEST non_locking_app_on_locked_coremask 00:07:17.881 ************************************ 00:07:17.881 00:14:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.881 00:14:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.881 00:14:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.881 00:14:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.881 ************************************ 00:07:17.881 START TEST locking_app_on_unlocked_coremask 00:07:17.881 ************************************ 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3881962 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3881962 /var/tmp/spdk.sock 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3881962 ']' 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.881 00:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.881 [2024-10-09 00:14:48.492560] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:17.881 [2024-10-09 00:14:48.492619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881962 ] 00:07:18.140 [2024-10-09 00:14:48.564382] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
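killprocess, used to tear down every target in this file, does a little more than kill: it sanity-checks the pid before signalling and then reaps it so the lock files are provably released. Condensed from the repeated uname/ps/kill/wait sequence above, with the sudo special case omitted:

    killprocess_sketch() {
        local pid=$1
        [ "$(uname)" = Linux ] || return 1
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        # the real helper special-cases sudo-wrapped targets; skipped here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so its spdk_cpu_lock files are gone
    }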
00:07:18.140 [2024-10-09 00:14:48.564412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.140 [2024-10-09 00:14:48.651704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3882006 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3882006 /var/tmp/spdk2.sock 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3882006 ']' 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.076 00:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.076 [2024-10-09 00:14:49.398152] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:19.076 [2024-10-09 00:14:49.398239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882006 ] 00:07:19.076 [2024-10-09 00:14:49.499681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.076 [2024-10-09 00:14:49.676583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.643 00:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.643 00:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:19.643 00:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3882006 00:07:19.643 00:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3882006 00:07:19.643 00:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.019 lslocks: write error 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3881962 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3881962 ']' 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3881962 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3881962 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3881962' 00:07:21.019 killing process with pid 3881962 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3881962 00:07:21.019 00:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3881962 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3882006 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3882006 ']' 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3882006 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3882006 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.587 00:14:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3882006' 00:07:21.587 killing process with pid 3882006 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3882006 00:07:21.587 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3882006 00:07:22.155 00:07:22.155 real 0m4.036s 00:07:22.155 user 0m4.326s 00:07:22.155 sys 0m1.332s 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.155 ************************************ 00:07:22.155 END TEST locking_app_on_unlocked_coremask 00:07:22.155 ************************************ 00:07:22.155 00:14:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:22.155 00:14:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.155 00:14:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.155 00:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.155 ************************************ 00:07:22.155 START TEST locking_app_on_locked_coremask 00:07:22.155 ************************************ 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3882526 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3882526 /var/tmp/spdk.sock 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3882526 ']' 00:07:22.155 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.156 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.156 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.156 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.156 00:14:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.156 [2024-10-09 00:14:52.611701] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:07:22.156 [2024-10-09 00:14:52.611766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882526 ] 00:07:22.156 [2024-10-09 00:14:52.686771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.156 [2024-10-09 00:14:52.771821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3882612 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3882612 /var/tmp/spdk2.sock 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3882612 /var/tmp/spdk2.sock 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3882612 /var/tmp/spdk2.sock 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3882612 ']' 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.091 00:14:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.091 [2024-10-09 00:14:53.498277] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
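The second target here is launched under NOT, the negative-test wrapper: the test passes only if the wrapped command fails, which is exactly what the core-claim error on the next lines delivers. A plausible reduction of the wrapper:

    NOT_sketch() {      # plausible reduction of autotest_common.sh's NOT
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: succeed only when the command failed
    }
    # NOT_sketch waitforlisten 3882612 /var/tmp/spdk2.sock
    # passes because the second target aborts with the claim error below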
00:07:23.091 [2024-10-09 00:14:53.498337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882612 ]
00:07:23.091 [2024-10-09 00:14:53.598048] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3882526 has claimed it.
00:07:23.091 [2024-10-09 00:14:53.598086] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:23.658 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3882612) - No such process
00:07:23.658 ERROR: process (pid: 3882612) is no longer running
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3882526
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3882526
00:07:23.658 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:24.225 lslocks: write error
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3882526
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3882526 ']'
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3882526
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:24.225 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3882526
00:07:24.226 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:24.226 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:24.226 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3882526'
killing process with pid 3882526
00:07:24.226 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3882526
00:07:24.226 00:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3882526
00:07:24.484
00:07:24.484 real 0m2.493s
00:07:24.484 user 0m2.757s
00:07:24.484 sys 0m0.794s
00:07:24.484 00:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:24.484 00:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:24.484 ************************************
00:07:24.484 END TEST locking_app_on_locked_coremask
00:07:24.484 ************************************
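The failure above is the point of this test: every core in an SPDK target's core mask is guarded by a lock file (the /var/tmp/spdk_cpu_lock_NNN names appear in check_remaining_locks later in this log), and locks_exist confirms via lslocks -p 3882526 | grep -q spdk_cpu_lock that the first target still holds its lock while the second instance is refused core 0. As a rough illustration of the same pattern (a hypothetical sketch built on the flock(1) utility, not the actual claim_cpu_cores code in app.c), a second claimant on an already-claimed core fails like this:

  # Hypothetical sketch: claim core 0 through an exclusive, non-blocking
  # lock on a file named after the /var/tmp/spdk_cpu_lock_NNN convention.
  exec 200>/var/tmp/spdk_cpu_lock_000
  if flock -n -x 200; then
      echo "claimed core 0"
  else
      echo "Cannot create lock on core 0, another process has claimed it." >&2
      exit 1
  fi

Run once, the sketch claims the core; run from a second shell while the first still holds fd 200, flock -n fails immediately, which is the behavior the ERROR lines above assert for spdk_tgt.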
00:07:24.743 00:14:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:24.743 00:14:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:24.743 00:14:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:24.743 00:14:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:24.743 ************************************
00:07:24.743 START TEST locking_overlapped_coremask
00:07:24.743 ************************************
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3882915
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3882915 /var/tmp/spdk.sock
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3882915 ']'
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:24.743 00:14:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:24.743 [2024-10-09 00:14:55.187864] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:24.743 [2024-10-09 00:14:55.187939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882915 ]
00:07:24.743 [2024-10-09 00:14:55.262334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:24.743 [2024-10-09 00:14:55.349489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:24.743 [2024-10-09 00:14:55.349577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:07:24.743 [2024-10-09 00:14:55.349580] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3882955
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3882955 /var/tmp/spdk2.sock
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3882955 /var/tmp/spdk2.sock
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3882955 /var/tmp/spdk2.sock
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3882955 ']'
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:25.678 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:25.678 [2024-10-09 00:14:56.075433] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:25.678 [2024-10-09 00:14:56.075522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882955 ]
00:07:25.678 [2024-10-09 00:14:56.178213] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3882915 has claimed it.
00:07:25.678 [2024-10-09 00:14:56.178251] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:26.245 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3882955) - No such process
00:07:26.245 ERROR: process (pid: 3882955) is no longer running
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3882915
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3882915 ']'
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3882915
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3882915
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3882915'
killing process with pid 3882915
00:07:26.245 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3882915
00:07:26.246 00:14:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3882915
00:07:26.814
00:07:26.814 real 0m2.015s
00:07:26.814 user 0m5.693s
00:07:26.814 sys 0m0.491s
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:26.814 ************************************
00:07:26.814 END TEST locking_overlapped_coremask
00:07:26.814 ************************************
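Why the second target trips over core 2 specifically: the first spdk_tgt above runs with -m 0x7 (cores 0 through 2) and the second with -m 0x1c (cores 2 through 4), and the bitwise AND of the two masks is exactly the contested core. The arithmetic, checked in the shell:

  # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4; their intersection is bit 2.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2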
00:07:26.814 00:14:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:26.814 00:14:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:26.814 00:14:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:26.814 00:14:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:26.814 ************************************
00:07:26.814 START TEST locking_overlapped_coremask_via_rpc
00:07:26.814 ************************************
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3883140
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3883140 /var/tmp/spdk.sock
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3883140 ']'
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:26.814 00:14:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:26.814 [2024-10-09 00:14:57.285601] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:26.814 [2024-10-09 00:14:57.285677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883140 ]
00:07:26.814 [2024-10-09 00:14:57.359487] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:26.814 [2024-10-09 00:14:57.359512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:26.814 [2024-10-09 00:14:57.442322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:26.814 [2024-10-09 00:14:57.442344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:07:26.814 [2024-10-09 00:14:57.442346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3883314
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3883314 /var/tmp/spdk2.sock
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3883314 ']'
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:27.758 00:14:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:27.758 [2024-10-09 00:14:58.186628] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:27.758 [2024-10-09 00:14:58.186722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883314 ]
00:07:27.758 [2024-10-09 00:14:58.286502] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:27.758 [2024-10-09 00:14:58.286530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:28.017 [2024-10-09 00:14:58.454516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:07:28.017 [2024-10-09 00:14:58.457861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:07:28.017 [2024-10-09 00:14:58.457863] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.584 [2024-10-09 00:14:59.102887] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3883140 has claimed it.
00:07:28.584 request:
00:07:28.584 {
00:07:28.584 "method": "framework_enable_cpumask_locks",
00:07:28.584 "req_id": 1
00:07:28.584 }
00:07:28.584 Got JSON-RPC error response
00:07:28.584 response:
00:07:28.584 {
00:07:28.584 "code": -32603,
00:07:28.584 "message": "Failed to claim CPU core: 2"
00:07:28.584 }
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3883140 /var/tmp/spdk.sock
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3883140 ']'
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:28.584 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3883314 /var/tmp/spdk2.sock
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3883314 ']'
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:28.842 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:29.101
00:07:29.101 real 0m2.266s
00:07:29.101 user 0m0.978s
00:07:29.101 sys 0m0.216s
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.101 00:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:29.101 ************************************
00:07:29.101 END TEST locking_overlapped_coremask_via_rpc
00:07:29.101 ************************************
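Unlike the previous test, both targets here start with --disable-cpumask-locks, so they boot cleanly on overlapping masks; the core locks are only taken when framework_enable_cpumask_locks is called over JSON-RPC. The first call (against the default /var/tmp/spdk.sock) succeeds, while the same call against the second target fails with the -32603 "Failed to claim CPU core: 2" response shown above, because the first target now owns core 2. Issued by hand with the rpc.py script used elsewhere in this job, the pair of calls would look roughly like this:

  RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  $RPC framework_enable_cpumask_locks                          # first target on /var/tmp/spdk.sock: succeeds
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: error -32603, core 2 already locked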
00:07:29.101 00:14:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:07:29.101 00:14:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3883140 ]]
00:07:29.101 00:14:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3883140
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3883140 ']'
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3883140
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3883140
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:29.101 00:14:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3883140'
killing process with pid 3883140
00:07:29.102 00:14:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3883140
00:07:29.102 00:14:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3883140
00:07:29.361 00:14:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3883314 ]]
00:07:29.361 00:14:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3883314
00:07:29.361 00:14:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3883314 ']'
00:07:29.361 00:14:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3883314
00:07:29.361 00:14:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:07:29.361 00:14:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:29.361 00:14:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3883314
00:07:29.621 00:15:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:07:29.621 00:15:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:07:29.621 00:15:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3883314'
killing process with pid 3883314
00:07:29.621 00:15:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3883314
00:07:29.621 00:15:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3883314
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3883140 ]]
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3883140
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3883140 ']'
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3883140
00:07:29.880 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3883140) - No such process
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3883140 is not found'
Process with pid 3883140 is not found
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3883314 ]]
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3883314
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3883314 ']'
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3883314
00:07:29.880 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3883314) - No such process
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3883314 is not found'
Process with pid 3883314 is not found
00:07:29.880 00:15:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:29.880
00:07:29.880 real 0m19.913s
00:07:29.880 user 0m33.231s
00:07:29.880 sys 0m6.494s
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.880 00:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:29.880 ************************************
00:07:29.880 END TEST cpu_locks
00:07:29.880 ************************************
00:07:29.880
00:07:29.880 real 0m47.375s
00:07:29.880 user 1m29.712s
00:07:29.880 sys 0m11.018s
00:07:29.880 00:15:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.880 00:15:00 event -- common/autotest_common.sh@10 -- # set +x
00:07:29.880 ************************************
00:07:29.880 END TEST event
00:07:29.880 ************************************
00:07:29.881 00:15:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh
00:07:29.881 00:15:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:29.881 00:15:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:29.881 00:15:00 -- common/autotest_common.sh@10 -- # set +x
00:07:29.881 ************************************
00:07:29.881 START TEST thread
00:07:29.881 ************************************
00:07:29.881 00:15:00 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh
00:07:30.138 * Looking for test storage...
00:07:30.138 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread
00:07:30.138 00:15:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:30.138 00:15:00 thread -- common/autotest_common.sh@1681 -- # lcov --version
00:07:30.138 00:15:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:30.138 00:15:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:30.138 00:15:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:30.138 00:15:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:30.138 00:15:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:30.138 00:15:00 thread -- scripts/common.sh@336 -- # IFS=.-:
00:07:30.138 00:15:00 thread -- scripts/common.sh@336 -- # read -ra ver1
00:07:30.138 00:15:00 thread -- scripts/common.sh@337 -- # IFS=.-:
00:07:30.138 00:15:00 thread -- scripts/common.sh@337 -- # read -ra ver2
00:07:30.138 00:15:00 thread -- scripts/common.sh@338 -- # local 'op=<'
00:07:30.138 00:15:00 thread -- scripts/common.sh@340 -- # ver1_l=2
00:07:30.138 00:15:00 thread -- scripts/common.sh@341 -- # ver2_l=1
00:07:30.138 00:15:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:30.138 00:15:00 thread -- scripts/common.sh@344 -- # case "$op" in
00:07:30.138 00:15:00 thread -- scripts/common.sh@345 -- # : 1
00:07:30.138 00:15:00 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:30.138 00:15:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:30.138 00:15:00 thread -- scripts/common.sh@365 -- # decimal 1
00:07:30.139 00:15:00 thread -- scripts/common.sh@353 -- # local d=1
00:07:30.139 00:15:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:30.139 00:15:00 thread -- scripts/common.sh@355 -- # echo 1
00:07:30.139 00:15:00 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:07:30.139 00:15:00 thread -- scripts/common.sh@366 -- # decimal 2
00:07:30.139 00:15:00 thread -- scripts/common.sh@353 -- # local d=2
00:07:30.139 00:15:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:30.139 00:15:00 thread -- scripts/common.sh@355 -- # echo 2
00:07:30.139 00:15:00 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:07:30.139 00:15:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:30.139 00:15:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:30.139 00:15:00 thread -- scripts/common.sh@368 -- # return 0
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.139 --rc genhtml_branch_coverage=1
00:07:30.139 --rc genhtml_function_coverage=1
00:07:30.139 --rc genhtml_legend=1
00:07:30.139 --rc geninfo_all_blocks=1
00:07:30.139 --rc geninfo_unexecuted_blocks=1
00:07:30.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:30.139 '
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.139 --rc genhtml_branch_coverage=1
00:07:30.139 --rc genhtml_function_coverage=1
00:07:30.139 --rc genhtml_legend=1
00:07:30.139 --rc geninfo_all_blocks=1
00:07:30.139 --rc geninfo_unexecuted_blocks=1
00:07:30.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:30.139 '
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.139 --rc genhtml_branch_coverage=1
00:07:30.139 --rc genhtml_function_coverage=1
00:07:30.139 --rc genhtml_legend=1
00:07:30.139 --rc geninfo_all_blocks=1
00:07:30.139 --rc geninfo_unexecuted_blocks=1
00:07:30.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:30.139 '
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:30.139 --rc genhtml_branch_coverage=1
00:07:30.139 --rc genhtml_function_coverage=1
00:07:30.139 --rc genhtml_legend=1
00:07:30.139 --rc geninfo_all_blocks=1
00:07:30.139 --rc geninfo_unexecuted_blocks=1
00:07:30.139 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:30.139 '
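The long scripts/common.sh trace above is the harness checking whether the installed lcov is at least version 2 (lt 1.15 2): each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right. Condensed into a standalone function, this is a simplified reconstruction of the traced logic, not the verbatim scripts/common.sh source:

  # Succeeds when version $1 sorts strictly before version $2,
  # comparing numeric fields split on '.', '-' and ':'.
  version_lt() {
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # versions are equal
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2'    # same verdict the trace reaches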
00:07:30.139 00:15:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:30.139 00:15:00 thread -- common/autotest_common.sh@10 -- # set +x
00:07:30.139 ************************************
00:07:30.139 START TEST thread_poller_perf
00:07:30.139 ************************************
00:07:30.139 00:15:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:30.397 [2024-10-09 00:15:00.732616] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:30.397 [2024-10-09 00:15:00.732700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883820 ]
00:07:30.397 [2024-10-09 00:15:00.810100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.397 [2024-10-09 00:15:00.898455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.397 Running 1000 pollers for 1 seconds with 1 microseconds period.
[2024-10-08T22:15:02.409Z] ======================================
00:07:31.774 [2024-10-08T22:15:02.409Z] busy:2302993154 (cyc)
00:07:31.774 [2024-10-08T22:15:02.409Z] total_run_count: 814000
00:07:31.774 [2024-10-08T22:15:02.409Z] tsc_hz: 2300000000 (cyc)
00:07:31.774 [2024-10-08T22:15:02.409Z] ======================================
00:07:31.774 [2024-10-08T22:15:02.409Z] poller_cost: 2829 (cyc), 1230 (nsec)
00:07:31.774
00:07:31.774 real 0m1.263s
00:07:31.774 user 0m1.163s
00:07:31.775 sys 0m0.094s
00:07:31.775 00:15:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:31.775 00:15:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:31.775 ************************************
00:07:31.775 END TEST thread_poller_perf
00:07:31.775 ************************************
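The poller_cost line is straightforward arithmetic over the numbers in the table: busy cycles divided by executions gives cycles per poller call, and the TSC frequency converts that to nanoseconds. For the run above, 2302993154 / 814000 = 2829 cyc, and 2829 cyc at 2.3 GHz is about 1230 ns, matching the summary. The same computation in the shell:

  busy=2302993154 runs=814000 tsc_hz=2300000000
  cost_cyc=$(( busy / runs ))                          # 2829 cycles per poller execution
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))      # 1230 nanoseconds
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The second run below repeats the measurement with a 0 microsecond period, where the far higher total_run_count drives the per-call cost down to 176 cyc.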
00:07:31.775 00:15:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:31.775 00:15:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:07:31.775 00:15:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:31.775 00:15:02 thread -- common/autotest_common.sh@10 -- # set +x
00:07:31.775 ************************************
00:07:31.775 START TEST thread_poller_perf
00:07:31.775 ************************************
00:07:31.775 00:15:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:31.775 [2024-10-09 00:15:02.065062] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:31.775 [2024-10-09 00:15:02.065121] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884087 ]
00:07:31.775 [2024-10-09 00:15:02.135289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.775 [2024-10-09 00:15:02.215623] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.775 Running 1000 pollers for 1 seconds with 0 microseconds period.
[2024-10-08T22:15:03.345Z] ======================================
00:07:32.710 [2024-10-08T22:15:03.345Z] busy:2301395104 (cyc)
00:07:32.710 [2024-10-08T22:15:03.345Z] total_run_count: 13003000
00:07:32.710 [2024-10-08T22:15:03.345Z] tsc_hz: 2300000000 (cyc)
00:07:32.710 [2024-10-08T22:15:03.345Z] ======================================
00:07:32.710 [2024-10-08T22:15:03.345Z] poller_cost: 176 (cyc), 76 (nsec)
00:07:32.710
00:07:32.710 real 0m1.234s
00:07:32.710 user 0m1.139s
00:07:32.710 sys 0m0.089s
00:07:32.711 00:15:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:32.711 00:15:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:32.711 ************************************
00:07:32.711 END TEST thread_poller_perf
00:07:32.711 ************************************
00:07:32.711 00:15:03 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:07:32.711 00:15:03 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:07:32.711 00:15:03 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:32.711 00:15:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:32.711 00:15:03 thread -- common/autotest_common.sh@10 -- # set +x
00:07:32.969 ************************************
00:07:32.969 START TEST thread_spdk_lock
00:07:32.969 ************************************
00:07:32.969 00:15:03 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock
00:07:32.969 [2024-10-09 00:15:03.387107] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:32.969 [2024-10-09 00:15:03.387188] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884287 ]
00:07:32.969 [2024-10-09 00:15:03.461477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:32.969 [2024-10-09 00:15:03.550587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:32.969 [2024-10-09 00:15:03.550589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.537 [2024-10-09 00:15:04.040743] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:07:33.537 [2024-10-09 00:15:04.040782] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3099:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:07:33.537 [2024-10-09 00:15:04.040793] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3054:sspin_stacks_print: *ERROR*: spinlock 0x14c6500
00:07:33.538 [2024-10-09 00:15:04.041680] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:07:33.538 [2024-10-09 00:15:04.041786] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:07:33.538 [2024-10-09 00:15:04.041805] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:07:33.538 Starting test contend
00:07:33.538 Worker Delay Wait us Hold us Total us
00:07:33.538 0 3 164880 188893 353773
00:07:33.538 1 5 87114 286557 373672
00:07:33.538 PASS test contend
00:07:33.538 Starting test hold_by_poller
00:07:33.538 PASS test hold_by_poller
00:07:33.538 Starting test hold_by_message
00:07:33.538 PASS test hold_by_message
00:07:33.538 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:07:33.538 100014 assertions passed
00:07:33.538 0 assertions failed
00:07:33.538
00:07:33.538 real 0m0.747s
00:07:33.538 user 0m1.136s
00:07:33.538 sys 0m0.097s
00:07:33.538 00:15:04 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:33.538 00:15:04 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:07:33.538 ************************************
00:07:33.538 END TEST thread_spdk_lock
00:07:33.538 ************************************
00:07:33.538
00:07:33.538 real 0m3.658s
00:07:33.538 user 0m3.631s
00:07:33.538 sys 0m0.534s
00:07:33.538 00:15:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:33.538 00:15:04 thread -- common/autotest_common.sh@10 -- # set +x
00:07:33.538 ************************************
00:07:33.538 END TEST thread
00:07:33.538 ************************************
00:07:33.796 00:15:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:33.796 00:15:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:33.796 00:15:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:33.796 00:15:04 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:33.796 00:15:04 -- common/autotest_common.sh@10 -- # set +x
00:07:33.796 ************************************
00:07:33.796 START TEST app_cmdline
00:07:33.796 ************************************
00:07:33.796 00:15:04 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:07:33.796 * Looking for test storage...
00:07:33.796 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:33.796 00:15:04 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:33.796 00:15:04 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version
00:07:33.796 00:15:04 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:33.796 00:15:04 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:33.796 00:15:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:33.797 00:15:04 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:33.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:33.797 --rc genhtml_branch_coverage=1
00:07:33.797 --rc genhtml_function_coverage=1
00:07:33.797 --rc genhtml_legend=1
00:07:33.797 --rc geninfo_all_blocks=1
00:07:33.797 --rc geninfo_unexecuted_blocks=1
00:07:33.797 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:33.797 '
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:33.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:33.797 --rc genhtml_branch_coverage=1
00:07:33.797 --rc genhtml_function_coverage=1
00:07:33.797 --rc genhtml_legend=1
00:07:33.797 --rc geninfo_all_blocks=1
00:07:33.797 --rc geninfo_unexecuted_blocks=1
00:07:33.797 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:33.797 '
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:33.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:33.797 --rc genhtml_branch_coverage=1
00:07:33.797 --rc genhtml_function_coverage=1
00:07:33.797 --rc genhtml_legend=1
00:07:33.797 --rc geninfo_all_blocks=1
00:07:33.797 --rc geninfo_unexecuted_blocks=1
00:07:33.797 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:33.797 '
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:33.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:33.797 --rc genhtml_branch_coverage=1
00:07:33.797 --rc genhtml_function_coverage=1
00:07:33.797 --rc genhtml_legend=1
00:07:33.797 --rc geninfo_all_blocks=1
00:07:33.797 --rc geninfo_unexecuted_blocks=1
00:07:33.797 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:33.797 '
00:07:33.797 00:15:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:33.797 00:15:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3884666
00:07:33.797 00:15:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:33.797 00:15:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3884666
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3884666 ']'
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:33.797 00:15:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:33.797 [2024-10-09 00:15:04.416177] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:07:33.797 [2024-10-09 00:15:04.416235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884666 ]
00:07:34.055 [2024-10-09 00:15:04.489650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.055 [2024-10-09 00:15:04.577293] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.327 00:15:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:34.327 00:15:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:07:34.327 00:15:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:34.588 {
00:07:34.588 "version": "SPDK v25.01-pre git sha1 6101e4048",
00:07:34.588 "fields": {
00:07:34.588 "major": 25,
00:07:34.588 "minor": 1,
00:07:34.588 "patch": 0,
00:07:34.588 "suffix": "-pre",
00:07:34.588 "commit": "6101e4048"
00:07:34.588 }
00:07:34.588 }
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:34.588 00:15:04 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:34.588 00:15:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.588 00:15:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:34.588 00:15:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.588 00:15:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:34.588 00:15:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:34.588 00:15:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:34.588 00:15:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]]
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:34.589 request:
00:07:34.589 {
00:07:34.589 "method": "env_dpdk_get_mem_stats",
00:07:34.589 "req_id": 1
00:07:34.589 }
00:07:34.589 Got JSON-RPC error response
00:07:34.589 response:
00:07:34.589 {
00:07:34.589 "code": -32601,
00:07:34.589 "message": "Method not found"
00:07:34.589 }
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:34.589 00:15:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3884666
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3884666 ']'
00:07:34.589 00:15:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3884666
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3884666
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3884666'
killing process with pid 3884666
00:07:34.848 00:15:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 3884666
00:07:35.107 00:15:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 3884666
00:07:35.107
00:07:35.107 real 0m1.416s
00:07:35.107 user 0m1.586s
00:07:35.107 sys 0m0.512s
00:07:35.107 00:15:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.107 00:15:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:35.107 ************************************
00:07:35.107 END TEST app_cmdline
00:07:35.107 ************************************
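The sequence above is the core of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are served, and the env_dpdk_get_mem_stats call is rejected with JSON-RPC error -32601 "Method not found" even though an unrestricted target would accept it. Replayed by hand with the same binary and script paths as this job:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  $SPDK/scripts/rpc.py spdk_get_version           # allowed: prints the version object seen above
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # filtered out: JSON-RPC -32601, Method not found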
00:07:35.107 00:15:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:35.107 00:15:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:35.107 00:15:05 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.107 00:15:05 -- common/autotest_common.sh@10 -- # set +x
00:07:35.107 ************************************
00:07:35.107 START TEST version
00:07:35.107 ************************************
00:07:35.107 00:15:05 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:07:35.367 * Looking for test storage...
00:07:35.367 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app
00:07:35.367 00:15:05 version -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:35.367 00:15:05 version -- common/autotest_common.sh@1681 -- # lcov --version
00:07:35.367 00:15:05 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:35.368 00:15:05 version -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:35.368 00:15:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:35.368 00:15:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:35.368 00:15:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:35.368 00:15:05 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:35.368 00:15:05 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:35.368 00:15:05 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:35.368 00:15:05 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:35.368 00:15:05 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:35.368 00:15:05 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:35.368 00:15:05 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:35.368 00:15:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:35.368 00:15:05 version -- scripts/common.sh@344 -- # case "$op" in
00:07:35.368 00:15:05 version -- scripts/common.sh@345 -- # : 1
00:07:35.368 00:15:05 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:35.368 00:15:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:35.368 00:15:05 version -- scripts/common.sh@365 -- # decimal 1
00:07:35.368 00:15:05 version -- scripts/common.sh@353 -- # local d=1
00:07:35.368 00:15:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:35.368 00:15:05 version -- scripts/common.sh@355 -- # echo 1
00:07:35.368 00:15:05 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:35.368 00:15:05 version -- scripts/common.sh@366 -- # decimal 2
00:07:35.368 00:15:05 version -- scripts/common.sh@353 -- # local d=2
00:07:35.368 00:15:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:35.368 00:15:05 version -- scripts/common.sh@355 -- # echo 2
00:07:35.368 00:15:05 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:35.368 00:15:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:35.368 00:15:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:35.368 00:15:05 version -- scripts/common.sh@368 -- # return 0
00:07:35.368 00:15:05 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:35.368 00:15:05 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:35.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.368 --rc genhtml_branch_coverage=1
00:07:35.368 --rc genhtml_function_coverage=1
00:07:35.368 --rc genhtml_legend=1
00:07:35.368 --rc geninfo_all_blocks=1
00:07:35.368 --rc geninfo_unexecuted_blocks=1
00:07:35.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:35.368 '
00:07:35.368 00:15:05 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:35.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.368 --rc genhtml_branch_coverage=1
00:07:35.368 --rc genhtml_function_coverage=1
00:07:35.368 --rc genhtml_legend=1
00:07:35.368 --rc geninfo_all_blocks=1
00:07:35.368 --rc geninfo_unexecuted_blocks=1
00:07:35.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:35.368 '
00:07:35.368 00:15:05 version -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:35.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:35.368 --rc genhtml_branch_coverage=1
00:07:35.368 --rc genhtml_function_coverage=1
00:07:35.368 --rc genhtml_legend=1
00:07:35.368 --rc geninfo_all_blocks=1
00:07:35.368 --rc geninfo_unexecuted_blocks=1
00:07:35.368 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:35.368 '
00:07:35.369 00:15:05 version -- app/version.sh@17 -- # get_header_version major
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # cut -f2
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.369 00:15:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.369 00:15:05 version -- app/version.sh@17 -- # major=25
00:07:35.369 00:15:05 version -- app/version.sh@18 -- # get_header_version minor
00:07:35.369 00:15:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # cut -f2
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.369 00:15:05 version -- app/version.sh@18 -- # minor=1
00:07:35.369 00:15:05 version -- app/version.sh@19 -- # get_header_version patch
00:07:35.369 00:15:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # cut -f2
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.369 00:15:05 version -- app/version.sh@19 -- # patch=0
00:07:35.369 00:15:05 version -- app/version.sh@20 -- # get_header_version suffix
00:07:35.369 00:15:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # cut -f2
00:07:35.369 00:15:05 version -- app/version.sh@14 -- # tr -d '"'
00:07:35.369 00:15:05 version -- app/version.sh@20 -- # suffix=-pre
00:07:35.369 00:15:05 version -- app/version.sh@22 -- # version=25.1
00:07:35.369 00:15:05 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:35.369 00:15:05 version -- app/version.sh@28 -- # version=25.1rc0
python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.369 00:15:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.369 00:15:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.369 00:07:35.369 real 0m0.268s 00:07:35.369 user 0m0.156s 00:07:35.369 sys 0m0.154s 00:07:35.369 00:15:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.369 00:15:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.369 ************************************ 00:07:35.369 END TEST version 00:07:35.369 ************************************ 00:07:35.635 00:15:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.635 00:15:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:35.635 00:15:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.635 00:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:35.635 00:15:06 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:35.635 00:15:06 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:07:35.635 00:15:06 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:35.635 00:15:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.635 00:15:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.635 00:15:06 -- common/autotest_common.sh@10 -- # set +x 00:07:35.635 ************************************ 00:07:35.635 START TEST llvm_fuzz 00:07:35.635 ************************************ 00:07:35.635 00:15:06 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:35.635 * Looking for test storage... 
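Two small mechanisms carry the version test traced above. First, scripts/common.sh decides whether the installed lcov predates 2.x by splitting both version strings on dots and comparing them field by field (the lt 1.15 2 trace); second, test/app/version.sh rebuilds the release string from include/spdk/version.h and checks it against Python's spdk.__version__. A sketch of both follows; the zero-padding of short versions, the tab-separated #define layout that cut -f2 implies, and the "-pre" to rc0 mapping are read off the trace rather than the sources, so treat them as assumptions.

ver_lt() {
    # Field-wise "is $1 < $2" in the spirit of the cmp_versions trace:
    # split on dots, pad the shorter version with zeros, compare numerically.
    local IFS=. i n
    local -a a=($1) b=($2)
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2, use the 1.x option set"

The header side needs only grep/cut/tr, mirroring the get_header_version calls above:

hdr=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
get_header_version() {
    # Pull the value of e.g. SPDK_VERSION_MAJOR and strip surrounding quotes.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)   # 25 in this run
minor=$(get_header_version MINOR)   # 1
patch=$(get_header_version PATCH)   # 0
suffix=$(get_header_version SUFFIX) # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0
echo "$version"  # 25.1rc0, matching py_version above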
00:07:35.635 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:35.635 00:15:06 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.635 00:15:06 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.635 00:15:06 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.636 00:15:06 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.636 00:15:06 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:35.636 00:15:06 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.636 00:15:06 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.636 --rc genhtml_branch_coverage=1 00:07:35.636 --rc genhtml_function_coverage=1 00:07:35.636 --rc genhtml_legend=1 00:07:35.636 --rc geninfo_all_blocks=1 00:07:35.636 --rc geninfo_unexecuted_blocks=1 00:07:35.636 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.636 ' 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.637 --rc genhtml_branch_coverage=1 00:07:35.637 --rc genhtml_function_coverage=1 00:07:35.637 --rc genhtml_legend=1 00:07:35.637 --rc geninfo_all_blocks=1 00:07:35.637 --rc 
geninfo_unexecuted_blocks=1 00:07:35.637 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.637 ' 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.637 --rc genhtml_branch_coverage=1 00:07:35.637 --rc genhtml_function_coverage=1 00:07:35.637 --rc genhtml_legend=1 00:07:35.637 --rc geninfo_all_blocks=1 00:07:35.637 --rc geninfo_unexecuted_blocks=1 00:07:35.637 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.637 ' 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.637 --rc genhtml_branch_coverage=1 00:07:35.637 --rc genhtml_function_coverage=1 00:07:35.637 --rc genhtml_legend=1 00:07:35.637 --rc geninfo_all_blocks=1 00:07:35.637 --rc geninfo_unexecuted_blocks=1 00:07:35.637 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.637 ' 00:07:35.637 00:15:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:35.637 00:15:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:35.637 00:15:06 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:35.902 00:15:06 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:35.902 00:15:06 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.902 00:15:06 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.902 00:15:06 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:35.902 ************************************ 00:07:35.902 START TEST nvmf_llvm_fuzz 00:07:35.902 ************************************ 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:35.902 * Looking for test storage... 
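llvm.sh discovers its targets by globbing test/fuzz/llvm/ and keeping only the basenames, which is why the trace prints 'common.sh llvm-gcov.sh nvmf vfio'; the per-target case statement then skips the two helper scripts and hands nvmf and vfio to run_test. A minimal sketch of that discover-and-dispatch loop (rootdir and the echo standing in for run_test are illustrative):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

fuzzers=("$rootdir/test/fuzz/llvm/"*)   # everything under llvm/, helpers included
fuzzers=("${fuzzers[@]##*/}")           # strip dirs: common.sh llvm-gcov.sh nvmf vfio

for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
        nvmf | vfio)
            # the harness wraps this in run_test to get the START/END banners
            echo "would run: $rootdir/test/fuzz/llvm/$fuzzer/run.sh"
            ;;
        *) ;;  # common.sh and llvm-gcov.sh are helpers, not targets
    esac
done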
00:07:35.902 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.902 --rc genhtml_branch_coverage=1 00:07:35.902 --rc genhtml_function_coverage=1 00:07:35.902 --rc genhtml_legend=1 00:07:35.902 --rc geninfo_all_blocks=1 00:07:35.902 --rc geninfo_unexecuted_blocks=1 00:07:35.902 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.902 ' 00:07:35.902 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.902 --rc genhtml_branch_coverage=1 00:07:35.902 --rc genhtml_function_coverage=1 00:07:35.902 --rc genhtml_legend=1 00:07:35.903 --rc geninfo_all_blocks=1 00:07:35.903 --rc geninfo_unexecuted_blocks=1 00:07:35.903 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.903 ' 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.903 --rc genhtml_branch_coverage=1 00:07:35.903 --rc genhtml_function_coverage=1 00:07:35.903 --rc genhtml_legend=1 00:07:35.903 --rc geninfo_all_blocks=1 00:07:35.903 --rc geninfo_unexecuted_blocks=1 00:07:35.903 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.903 ' 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.903 --rc genhtml_branch_coverage=1 00:07:35.903 --rc genhtml_function_coverage=1 00:07:35.903 --rc genhtml_legend=1 00:07:35.903 --rc geninfo_all_blocks=1 00:07:35.903 --rc geninfo_unexecuted_blocks=1 00:07:35.903 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:35.903 ' 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:35.903 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:35.904 #define SPDK_CONFIG_H 00:07:35.904 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:35.904 #define SPDK_CONFIG_APPS 1 00:07:35.904 #define SPDK_CONFIG_ARCH native 00:07:35.904 #undef SPDK_CONFIG_ASAN 00:07:35.904 #undef SPDK_CONFIG_AVAHI 00:07:35.904 #undef SPDK_CONFIG_CET 00:07:35.904 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:35.904 #define SPDK_CONFIG_COVERAGE 1 00:07:35.904 #define SPDK_CONFIG_CROSS_PREFIX 00:07:35.904 #undef SPDK_CONFIG_CRYPTO 00:07:35.904 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:35.904 #undef SPDK_CONFIG_CUSTOMOCF 00:07:35.904 #undef SPDK_CONFIG_DAOS 00:07:35.904 #define SPDK_CONFIG_DAOS_DIR 00:07:35.904 #define SPDK_CONFIG_DEBUG 1 00:07:35.904 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:35.904 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:35.904 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:35.904 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:35.904 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:35.904 #undef SPDK_CONFIG_DPDK_UADK 00:07:35.904 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:35.904 #define SPDK_CONFIG_EXAMPLES 1 00:07:35.904 #undef SPDK_CONFIG_FC 00:07:35.904 #define SPDK_CONFIG_FC_PATH 00:07:35.904 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:35.904 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:35.904 #define SPDK_CONFIG_FSDEV 1 00:07:35.904 #undef SPDK_CONFIG_FUSE 00:07:35.904 #define SPDK_CONFIG_FUZZER 1 00:07:35.904 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:35.904 #undef SPDK_CONFIG_GOLANG 00:07:35.904 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:35.904 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:35.904 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:35.904 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:35.904 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:35.904 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:35.904 #undef SPDK_CONFIG_HAVE_LZ4 00:07:35.904 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:35.904 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:35.904 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:35.904 #define SPDK_CONFIG_IDXD 1 00:07:35.904 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:35.904 #undef SPDK_CONFIG_IPSEC_MB 00:07:35.904 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:35.904 #define SPDK_CONFIG_ISAL 1 00:07:35.904 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:35.904 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:35.904 #define SPDK_CONFIG_LIBDIR 00:07:35.904 #undef SPDK_CONFIG_LTO 00:07:35.904 #define SPDK_CONFIG_MAX_LCORES 128 00:07:35.904 #define SPDK_CONFIG_NVME_CUSE 1 00:07:35.904 #undef SPDK_CONFIG_OCF 00:07:35.904 #define SPDK_CONFIG_OCF_PATH 00:07:35.904 #define SPDK_CONFIG_OPENSSL_PATH 00:07:35.904 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:35.904 #define SPDK_CONFIG_PGO_DIR 00:07:35.904 #undef SPDK_CONFIG_PGO_USE 00:07:35.904 #define SPDK_CONFIG_PREFIX /usr/local 00:07:35.904 #undef SPDK_CONFIG_RAID5F 00:07:35.904 #undef SPDK_CONFIG_RBD 00:07:35.904 #define SPDK_CONFIG_RDMA 1 00:07:35.904 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:35.904 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:35.904 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:35.904 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:35.904 #undef SPDK_CONFIG_SHARED 00:07:35.904 #undef SPDK_CONFIG_SMA 00:07:35.904 #define SPDK_CONFIG_TESTS 1 00:07:35.904 #undef SPDK_CONFIG_TSAN 00:07:35.904 #define SPDK_CONFIG_UBLK 1 00:07:35.904 #define SPDK_CONFIG_UBSAN 1 00:07:35.904 #undef SPDK_CONFIG_UNIT_TESTS 00:07:35.904 #undef SPDK_CONFIG_URING 00:07:35.904 #define SPDK_CONFIG_URING_PATH 00:07:35.904 #undef SPDK_CONFIG_URING_ZNS 00:07:35.904 #undef SPDK_CONFIG_USDT 00:07:35.904 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:35.904 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:35.904 #define SPDK_CONFIG_VFIO_USER 1 00:07:35.904 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:35.904 #define SPDK_CONFIG_VHOST 1 00:07:35.904 #define SPDK_CONFIG_VIRTIO 1 00:07:35.904 #undef SPDK_CONFIG_VTUNE 00:07:35.904 #define SPDK_CONFIG_VTUNE_DIR 00:07:35.904 #define SPDK_CONFIG_WERROR 1 00:07:35.904 #define SPDK_CONFIG_WPDK_DIR 00:07:35.904 #undef SPDK_CONFIG_XNVME 00:07:35.904 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:35.904 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:36.166 00:15:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
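The wall of ': 0' / 'export SPDK_TEST_*' pairs running through this stretch is autotest_common.sh giving every test knob a default and exporting it; values already set by autorun-spdk.conf survive, which is why SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT, SPDK_TEST_SETUP and SPDK_RUN_UBSAN trace as ': 1' while the rest fall back to ': 0'. A minimal sketch of the idiom, with a representative handful of the flags:

# ${VAR:=default} assigns only when VAR is unset or empty, so values sourced
# earlier from autorun-spdk.conf win over these defaults.
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}";  export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_FUZZER:=0}";          export SPDK_TEST_FUZZER
: "${SPDK_TEST_FUZZER_SHORT:=0}";    export SPDK_TEST_FUZZER_SHORT
: "${SPDK_TEST_SETUP:=0}";           export SPDK_TEST_SETUP
: "${SPDK_RUN_UBSAN:=0}";            export SPDK_RUN_UBSAN
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT  # non-boolean default

echo "$SPDK_TEST_FUZZER"  # 1 when autorun-spdk.conf set it, else 0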
00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:36.166 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3885279 ]] 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3885279 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:36.167 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.yaLs8N 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.yaLs8N/tests/nvmf /tmp/spdk.yaLs8N 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86310506496 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8189788160 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 
00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894159872 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5902336 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249551360 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=598016 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:36.168 * Looking for test storage... 
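The set_test_storage trace above captures a reusable pattern: parse `df -T` output into parallel associative arrays keyed by mount point, resolve each candidate directory to the mount it lives on, and keep the first candidate whose filesystem has at least the requested space. A minimal sketch of that probing pattern, not the exact autotest_common.sh code — it assumes `df -PT -B1` for byte-sized, one-line-per-filesystem output, and takes the candidate directories as arguments; names beyond those in the trace are illustrative:

    #!/usr/bin/env bash
    # Sketch of the storage-probe pattern seen in the trace above.
    requested_size=2214592512            # bytes, as requested in the trace
    declare -A fss sizes avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs                # filesystem type (overlay, tmpfs, ...)
        sizes["$mount"]=$size            # total bytes
        avails["$mount"]=$avail          # available bytes
    done < <(df -PT -B1 | tail -n +2)    # -P: one line per fs; -B1: bytes; skip header
    for target_dir in "$@"; do
        # Resolve the candidate directory to its mount point, as the trace does.
        mount=$(df -P "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        if (( avails[$mount] >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            exit 0
        fi
    done
    echo 'No candidate directory had enough free space' >&2
    exit 1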
00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86310506496 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10404380672 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.168 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.168 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:36.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.169 --rc genhtml_branch_coverage=1 00:07:36.169 --rc genhtml_function_coverage=1 00:07:36.169 --rc genhtml_legend=1 00:07:36.169 --rc geninfo_all_blocks=1 00:07:36.169 --rc geninfo_unexecuted_blocks=1 00:07:36.169 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.169 ' 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:36.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.169 --rc genhtml_branch_coverage=1 00:07:36.169 --rc genhtml_function_coverage=1 00:07:36.169 --rc genhtml_legend=1 00:07:36.169 --rc geninfo_all_blocks=1 00:07:36.169 --rc geninfo_unexecuted_blocks=1 00:07:36.169 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.169 ' 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:36.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.169 --rc genhtml_branch_coverage=1 00:07:36.169 --rc genhtml_function_coverage=1 00:07:36.169 --rc genhtml_legend=1 00:07:36.169 --rc geninfo_all_blocks=1 00:07:36.169 --rc geninfo_unexecuted_blocks=1 00:07:36.169 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.169 ' 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:36.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.169 --rc genhtml_branch_coverage=1 00:07:36.169 --rc genhtml_function_coverage=1 00:07:36.169 --rc genhtml_legend=1 00:07:36.169 --rc geninfo_all_blocks=1 00:07:36.169 --rc geninfo_unexecuted_blocks=1 00:07:36.169 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:36.169 ' 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:36.169 00:15:06 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.169 00:15:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:36.428 [2024-10-09 00:15:06.807144] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:36.428 [2024-10-09 00:15:06.807223] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885346 ] 00:07:36.686 [2024-10-09 00:15:07.113497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.686 [2024-10-09 00:15:07.199333] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.686 [2024-10-09 00:15:07.258372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.686 [2024-10-09 00:15:07.274625] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:36.686 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.686 INFO: Seed: 4021968376 00:07:36.686 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:36.686 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:36.686 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:36.686 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.686 #2 INITED exec/s: 0 rss: 66Mb 00:07:36.686 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.686 This may also happen if the target rejected all inputs we tried so far 00:07:36.944 [2024-10-09 00:15:07.322310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.944 [2024-10-09 00:15:07.322343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.203 NEW_FUNC[1/714]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:37.203 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.203 #11 NEW cov: 12152 ft: 12148 corp: 2/81b lim: 320 exec/s: 0 rss: 73Mb L: 80/80 MS: 4 CopyPart-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:37.203 [2024-10-09 00:15:07.643137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.203 [2024-10-09 00:15:07.643175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.203 #12 NEW cov: 12265 ft: 12811 corp: 3/161b lim: 320 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 ChangeBinInt- 00:07:37.203 [2024-10-09 00:15:07.703270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.203 [2024-10-09 00:15:07.703300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.203 #13 NEW cov: 12271 ft: 13112 corp: 4/241b lim: 320 exec/s: 0 rss: 73Mb L: 80/80 MS: 1 ChangeBit- 00:07:37.203 [2024-10-09 00:15:07.743442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:37.203 [2024-10-09 00:15:07.743469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.203 [2024-10-09 00:15:07.743521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.203 [2024-10-09 00:15:07.743535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.203 #14 NEW cov: 12379 ft: 13520 corp: 5/372b lim: 320 exec/s: 0 rss: 73Mb L: 131/131 MS: 1 CopyPart- 00:07:37.203 [2024-10-09 00:15:07.783552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.203 [2024-10-09 00:15:07.783579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.203 [2024-10-09 00:15:07.783629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.203 [2024-10-09 00:15:07.783643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.203 #15 NEW cov: 12379 ft: 13608 corp: 6/503b lim: 320 exec/s: 0 rss: 74Mb L: 131/131 MS: 1 ShuffleBytes- 00:07:37.462 [2024-10-09 00:15:07.843596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.462 [2024-10-09 00:15:07.843624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.462 #16 NEW cov: 12379 ft: 13674 corp: 7/592b lim: 320 exec/s: 0 rss: 74Mb L: 89/131 MS: 1 InsertRepeatedBytes- 00:07:37.462 [2024-10-09 00:15:07.903780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xff0000000000000a 00:07:37.462 [2024-10-09 00:15:07.903807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.462 #18 NEW cov: 12379 ft: 13710 corp: 8/672b lim: 320 exec/s: 0 rss: 74Mb L: 80/131 MS: 2 EraseBytes-CopyPart- 00:07:37.462 [2024-10-09 00:15:07.964075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.462 [2024-10-09 00:15:07.964100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.462 [2024-10-09 00:15:07.964151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.462 [2024-10-09 00:15:07.964164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.462 #19 NEW cov: 12379 ft: 13744 corp: 9/803b lim: 320 exec/s: 0 rss: 74Mb L: 131/131 MS: 1 ShuffleBytes- 00:07:37.462 [2024-10-09 00:15:08.024134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.462 [2024-10-09 00:15:08.024160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.462 #20 NEW cov: 12379 
ft: 13757 corp: 10/883b lim: 320 exec/s: 0 rss: 74Mb L: 80/131 MS: 1 CrossOver- 00:07:37.462 [2024-10-09 00:15:08.064217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.462 [2024-10-09 00:15:08.064249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 #21 NEW cov: 12379 ft: 13832 corp: 11/963b lim: 320 exec/s: 0 rss: 74Mb L: 80/131 MS: 1 ChangeBit- 00:07:37.720 [2024-10-09 00:15:08.124420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.720 [2024-10-09 00:15:08.124448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 #22 NEW cov: 12379 ft: 13841 corp: 12/1043b lim: 320 exec/s: 0 rss: 74Mb L: 80/131 MS: 1 ChangeBinInt- 00:07:37.720 [2024-10-09 00:15:08.164469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.720 [2024-10-09 00:15:08.164496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:37.720 #23 NEW cov: 12402 ft: 13862 corp: 13/1132b lim: 320 exec/s: 0 rss: 74Mb L: 89/131 MS: 1 CrossOver- 00:07:37.720 [2024-10-09 00:15:08.224660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.720 [2024-10-09 00:15:08.224686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 #24 NEW cov: 12402 ft: 13867 corp: 14/1221b lim: 320 exec/s: 0 rss: 74Mb L: 89/131 MS: 1 ShuffleBytes- 00:07:37.720 [2024-10-09 00:15:08.264756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.720 [2024-10-09 00:15:08.264781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 #30 NEW cov: 12402 ft: 13896 corp: 15/1340b lim: 320 exec/s: 0 rss: 74Mb L: 119/131 MS: 1 InsertRepeatedBytes- 00:07:37.720 [2024-10-09 00:15:08.304871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.720 [2024-10-09 00:15:08.304897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.720 #31 NEW cov: 12402 ft: 13917 corp: 16/1420b lim: 320 exec/s: 31 rss: 74Mb L: 80/131 MS: 1 ShuffleBytes- 00:07:37.979 [2024-10-09 00:15:08.365095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.365122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 #32 NEW cov: 12402 ft: 13931 corp: 17/1500b lim: 320 exec/s: 32 rss: 74Mb L: 80/131 MS: 1 ShuffleBytes- 00:07:37.979 [2024-10-09 00:15:08.405146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.405177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 #33 NEW cov: 12402 ft: 13982 corp: 18/1580b lim: 320 exec/s: 33 rss: 74Mb L: 80/131 MS: 1 ShuffleBytes- 00:07:37.979 [2024-10-09 00:15:08.445367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.445393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 [2024-10-09 00:15:08.445452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:0000ffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:37.979 [2024-10-09 00:15:08.445467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.979 NEW_FUNC[1/1]: 0x14f93a8 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:07:37.979 #34 NEW cov: 12433 ft: 14046 corp: 19/1708b lim: 320 exec/s: 34 rss: 74Mb L: 128/131 MS: 1 InsertRepeatedBytes- 00:07:37.979 [2024-10-09 00:15:08.485398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.485424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 #35 NEW cov: 12433 ft: 14064 corp: 20/1797b lim: 320 exec/s: 35 rss: 74Mb L: 89/131 MS: 1 ShuffleBytes- 00:07:37.979 [2024-10-09 00:15:08.545562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.545588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 #37 NEW cov: 12433 ft: 14081 corp: 21/1876b lim: 320 exec/s: 37 rss: 74Mb L: 79/131 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:37.979 [2024-10-09 00:15:08.585929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.979 [2024-10-09 00:15:08.585955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.979 [2024-10-09 00:15:08.586006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.979 [2024-10-09 00:15:08.586020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.979 [2024-10-09 00:15:08.586071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.979 [2024-10-09 00:15:08.586085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.236 #38 NEW cov: 12433 ft: 14300 corp: 22/2094b lim: 320 exec/s: 38 rss: 74Mb L: 218/218 MS: 1 CrossOver- 00:07:38.236 [2024-10-09 00:15:08.645857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:4 cdw10:00400000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.236 [2024-10-09 00:15:08.645883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.236 #39 NEW cov: 12433 ft: 14317 corp: 23/2175b lim: 320 exec/s: 39 rss: 74Mb L: 81/218 MS: 1 InsertByte- 00:07:38.236 [2024-10-09 00:15:08.706032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.236 [2024-10-09 00:15:08.706058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.236 #40 NEW cov: 12433 ft: 14338 corp: 24/2264b lim: 320 exec/s: 40 rss: 74Mb L: 89/218 MS: 1 ChangeBinInt- 00:07:38.236 [2024-10-09 00:15:08.746132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.236 [2024-10-09 00:15:08.746158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.236 #41 NEW cov: 12433 ft: 14339 corp: 25/2331b lim: 320 exec/s: 41 rss: 75Mb L: 67/218 MS: 1 EraseBytes- 00:07:38.236 [2024-10-09 00:15:08.806293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.237 [2024-10-09 00:15:08.806318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.237 #42 NEW cov: 12433 ft: 14360 corp: 26/2411b lim: 320 exec/s: 42 rss: 75Mb L: 80/218 MS: 1 ChangeBit- 00:07:38.237 [2024-10-09 00:15:08.846396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.237 [2024-10-09 00:15:08.846422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.495 #43 NEW cov: 12433 ft: 14362 corp: 27/2492b lim: 320 exec/s: 43 rss: 75Mb L: 81/218 MS: 1 InsertByte- 00:07:38.495 [2024-10-09 00:15:08.906558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.495 [2024-10-09 00:15:08.906585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.495 #44 NEW cov: 12433 ft: 14372 corp: 28/2573b lim: 320 exec/s: 44 rss: 75Mb L: 81/218 MS: 1 ChangeByte- 00:07:38.495 [2024-10-09 00:15:08.966727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.495 [2024-10-09 00:15:08.966754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.495 #45 NEW cov: 12433 ft: 14390 corp: 29/2653b lim: 320 exec/s: 45 rss: 75Mb L: 80/218 MS: 1 ShuffleBytes- 00:07:38.495 [2024-10-09 00:15:09.026894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.495 [2024-10-09 00:15:09.026921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.495 #46 NEW cov: 12433 ft: 14403 corp: 30/2733b lim: 320 exec/s: 46 rss: 
75Mb L: 80/218 MS: 1 CMP- DE: "\000\000\000\0009\237~\353"- 00:07:38.495 [2024-10-09 00:15:09.067007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.495 [2024-10-09 00:15:09.067033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.495 #47 NEW cov: 12433 ft: 14415 corp: 31/2822b lim: 320 exec/s: 47 rss: 75Mb L: 89/218 MS: 1 ChangeBit- 00:07:38.495 [2024-10-09 00:15:09.127194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.495 [2024-10-09 00:15:09.127221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.767 #48 NEW cov: 12433 ft: 14420 corp: 32/2903b lim: 320 exec/s: 48 rss: 75Mb L: 81/218 MS: 1 ChangeByte- 00:07:38.767 [2024-10-09 00:15:09.167288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.767 [2024-10-09 00:15:09.167314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.767 #49 NEW cov: 12433 ft: 14430 corp: 33/2983b lim: 320 exec/s: 49 rss: 75Mb L: 80/218 MS: 1 ChangeByte- 00:07:38.767 [2024-10-09 00:15:09.207606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.767 [2024-10-09 00:15:09.207632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.767 [2024-10-09 00:15:09.207692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:61616161 cdw11:61616161 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:38.767 [2024-10-09 00:15:09.207706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.767 [2024-10-09 00:15:09.207767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (61) qid:0 cid:6 nsid:61616161 cdw10:61616161 cdw11:61616161 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.767 [2024-10-09 00:15:09.207781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.767 NEW_FUNC[1/1]: 0x192ebb8 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:38.767 #50 NEW cov: 12447 ft: 14826 corp: 34/3200b lim: 320 exec/s: 50 rss: 75Mb L: 217/218 MS: 1 InsertRepeatedBytes- 00:07:38.767 [2024-10-09 00:15:09.247508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xa000000 00:07:38.767 [2024-10-09 00:15:09.247538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.767 #51 NEW cov: 12447 ft: 14840 corp: 35/3324b lim: 320 exec/s: 51 rss: 75Mb L: 124/218 MS: 1 CrossOver- 00:07:38.767 [2024-10-09 00:15:09.307757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.767 [2024-10-09 00:15:09.307783] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.767 [2024-10-09 00:15:09.307837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:38.767 [2024-10-09 00:15:09.307852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.767 #52 NEW cov: 12447 ft: 14843 corp: 36/3500b lim: 320 exec/s: 26 rss: 75Mb L: 176/218 MS: 1 CrossOver- 00:07:38.767 #52 DONE cov: 12447 ft: 14843 corp: 36/3500b lim: 320 exec/s: 26 rss: 75Mb 00:07:38.767 ###### Recommended dictionary. ###### 00:07:38.767 "\000\000\000\0009\237~\353" # Uses: 0 00:07:38.767 ###### End of recommended dictionary. ###### 00:07:38.767 Done 52 runs in 2 second(s) 00:07:39.033 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.034 00:15:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:39.034 [2024-10-09 00:15:09.539475] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
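The cleanup-and-relaunch sequence above shows how nvmf/run.sh isolates each of the 25 fuzzers: the listener port is derived from the fuzzer index ("44" plus the zero-padded index, so fuzzer 0 listens on 4400 and fuzzer 1 on 4401), the JSON config is rewritten with sed to point at that port, and two known-benign leaks are suppressed before the fuzzer binary starts. A condensed sketch of that per-fuzzer setup, with illustrative file locations (the trid format, sed substitution, leak names, and LSAN options are taken from the trace; everything else is an assumption):

    #!/usr/bin/env bash
    # Sketch of the per-fuzzer setup visible in nvmf/run.sh; paths are illustrative.
    fuzzer_type=$1                               # 0..24, one port per fuzzer
    timen=${2:-1}                                # seconds to fuzz (-t)
    port="44$(printf '%02d' "$fuzzer_type")"     # 0 -> 4400, 1 -> 4401, ...
    corpus_dir=$PWD/corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    mkdir -p "$corpus_dir"

    # Give this fuzzer its own NVMe/TCP listener by rewriting the default port.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" fuzz_json.conf > "$nvmf_cfg"

    # Suppress the two leaks the harness knows about so LSan does not fail the run.
    printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > "$suppress_file"
    export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    ./llvm_nvme_fuzz -m 0x1 -s 512 -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type"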
00:07:39.034 [2024-10-09 00:15:09.539553] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885736 ] 00:07:39.292 [2024-10-09 00:15:09.834988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.551 [2024-10-09 00:15:09.928791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.551 [2024-10-09 00:15:09.988129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.551 [2024-10-09 00:15:10.004384] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:39.551 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.551 INFO: Seed: 2459992806 00:07:39.551 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:39.551 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:39.551 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:39.551 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.551 #2 INITED exec/s: 0 rss: 66Mb 00:07:39.551 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.551 This may also happen if the target rejected all inputs we tried so far 00:07:39.551 [2024-10-09 00:15:10.059824] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:39.551 [2024-10-09 00:15:10.059946] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:39.551 [2024-10-09 00:15:10.060061] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:39.551 [2024-10-09 00:15:10.060295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.551 [2024-10-09 00:15:10.060328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.551 [2024-10-09 00:15:10.060400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.551 [2024-10-09 00:15:10.060418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.551 [2024-10-09 00:15:10.060486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.551 [2024-10-09 00:15:10.060503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.810 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:39.810 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.810 #8 NEW cov: 12235 ft: 12234 corp: 2/21b lim: 30 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:39.810 [2024-10-09 00:15:10.410569] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:39.810 [2024-10-09 
00:15:10.410690] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:39.810 [2024-10-09 00:15:10.410791] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e2d 00:07:39.810 [2024-10-09 00:15:10.411008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.810 [2024-10-09 00:15:10.411056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.810 [2024-10-09 00:15:10.411123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.810 [2024-10-09 00:15:10.411143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.810 [2024-10-09 00:15:10.411208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.810 [2024-10-09 00:15:10.411228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.069 #14 NEW cov: 12348 ft: 12832 corp: 3/41b lim: 30 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:07:40.069 [2024-10-09 00:15:10.470668] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:40.069 [2024-10-09 00:15:10.470781] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:40.069 [2024-10-09 00:15:10.470910] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:40.070 [2024-10-09 00:15:10.471021] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:40.070 [2024-10-09 00:15:10.471225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.471254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.471323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.471343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.471409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.471428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.471493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.471513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.070 #18 NEW cov: 12354 ft: 13651 corp: 4/67b lim: 30 exec/s: 0 rss: 73Mb L: 26/26 MS: 4 
CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:40.070 [2024-10-09 00:15:10.510674] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.510780] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.510990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.511017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.511085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.511104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.070 #19 NEW cov: 12439 ft: 14119 corp: 5/80b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 EraseBytes- 00:07:40.070 [2024-10-09 00:15:10.550858] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.550968] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.551064] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.551279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.551306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.551373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.551391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.551460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:243e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.551477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.070 #20 NEW cov: 12439 ft: 14228 corp: 6/101b lim: 30 exec/s: 0 rss: 73Mb L: 21/26 MS: 1 InsertByte- 00:07:40.070 [2024-10-09 00:15:10.590973] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.591082] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003f3e 00:07:40.070 [2024-10-09 00:15:10.591187] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.591397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.591422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.591489] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.591506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.591571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e24023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.591588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.070 #21 NEW cov: 12439 ft: 14331 corp: 7/123b lim: 30 exec/s: 0 rss: 73Mb L: 22/26 MS: 1 InsertByte- 00:07:40.070 [2024-10-09 00:15:10.651063] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.651189] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.070 [2024-10-09 00:15:10.651389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3a3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.651415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.070 [2024-10-09 00:15:10.651483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.070 [2024-10-09 00:15:10.651500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.070 #22 NEW cov: 12439 ft: 14383 corp: 8/136b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 ChangeBit- 00:07:40.329 [2024-10-09 00:15:10.711260] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.711384] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003f3e 00:07:40.329 [2024-10-09 00:15:10.711483] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003e3e 00:07:40.329 [2024-10-09 00:15:10.711681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.711707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.711776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.711794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.711865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e24833e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.711889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.329 #23 NEW cov: 12439 ft: 14421 corp: 9/158b lim: 30 exec/s: 0 rss: 74Mb L: 22/26 MS: 1 ChangeByte- 00:07:40.329 [2024-10-09 00:15:10.771447] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.771556] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.771655] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.771865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.771895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.771963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.771980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.772058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.772074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.329 #24 NEW cov: 12439 ft: 14528 corp: 10/181b lim: 30 exec/s: 0 rss: 74Mb L: 23/26 MS: 1 CopyPart- 00:07:40.329 [2024-10-09 00:15:10.811626] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.811742] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.811853] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.811956] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (63740) > buf size (4096) 00:07:40.329 [2024-10-09 00:15:10.812053] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e0a 00:07:40.329 [2024-10-09 00:15:10.812271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.812299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.812370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.812391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.812459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.812477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.812552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3e3e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.812571] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.812652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.812671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.329 #25 NEW cov: 12462 ft: 14661 corp: 11/211b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:40.329 [2024-10-09 00:15:10.871732] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.871861] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.871964] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.872180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.872206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.872273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.872292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.872357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.872375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.329 #26 NEW cov: 12462 ft: 14682 corp: 12/234b lim: 30 exec/s: 0 rss: 74Mb L: 23/30 MS: 1 ShuffleBytes- 00:07:40.329 [2024-10-09 00:15:10.911792] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.911908] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.912004] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.912197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.912223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.912289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3ebe023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.912308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.912374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 
[2024-10-09 00:15:10.912392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.329 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:40.329 #27 NEW cov: 12485 ft: 14744 corp: 13/254b lim: 30 exec/s: 0 rss: 74Mb L: 20/30 MS: 1 ChangeBit- 00:07:40.329 [2024-10-09 00:15:10.951914] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.952022] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.329 [2024-10-09 00:15:10.952224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.952250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.329 [2024-10-09 00:15:10.952317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.329 [2024-10-09 00:15:10.952336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.588 #28 NEW cov: 12485 ft: 14757 corp: 14/267b lim: 30 exec/s: 0 rss: 74Mb L: 13/30 MS: 1 ShuffleBytes- 00:07:40.588 [2024-10-09 00:15:10.992035] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:10.992158] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:10.992259] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:10.992467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3c023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:10.992493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:10.992561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:10.992581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:10.992646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:10.992663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.588 #29 NEW cov: 12485 ft: 14862 corp: 15/290b lim: 30 exec/s: 0 rss: 74Mb L: 23/30 MS: 1 ChangeBit- 00:07:40.588 [2024-10-09 00:15:11.032134] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.032242] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.032343] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.032543] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.032569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.032637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.032656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.032719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.032737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.588 #30 NEW cov: 12485 ft: 14871 corp: 16/311b lim: 30 exec/s: 30 rss: 74Mb L: 21/30 MS: 1 CopyPart- 00:07:40.588 [2024-10-09 00:15:11.072294] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.072397] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.072516] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.588 [2024-10-09 00:15:11.072619] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (63740) > buf size (4096) 00:07:40.588 [2024-10-09 00:15:11.072722] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e0a 00:07:40.588 [2024-10-09 00:15:11.072945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.072972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.073039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.073062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.073125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e0210 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.073144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.073208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3e3e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.073226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.588 [2024-10-09 00:15:11.073291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.588 [2024-10-09 00:15:11.073310] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.589 #31 NEW cov: 12485 ft: 14909 corp: 17/341b lim: 30 exec/s: 31 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:07:40.589 [2024-10-09 00:15:11.132441] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.589 [2024-10-09 00:15:11.132550] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.589 [2024-10-09 00:15:11.132654] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.589 [2024-10-09 00:15:11.132858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.589 [2024-10-09 00:15:11.132885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.589 [2024-10-09 00:15:11.132952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.589 [2024-10-09 00:15:11.132970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.589 [2024-10-09 00:15:11.133035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.589 [2024-10-09 00:15:11.133054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.589 #32 NEW cov: 12485 ft: 14957 corp: 18/361b lim: 30 exec/s: 32 rss: 74Mb L: 20/30 MS: 1 ShuffleBytes- 00:07:40.589 [2024-10-09 00:15:11.172508] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.589 [2024-10-09 00:15:11.172628] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.589 [2024-10-09 00:15:11.172830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3a3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.589 [2024-10-09 00:15:11.172855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.589 [2024-10-09 00:15:11.172924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.589 [2024-10-09 00:15:11.172942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.589 #33 NEW cov: 12485 ft: 14974 corp: 19/374b lim: 30 exec/s: 33 rss: 74Mb L: 13/30 MS: 1 ChangeByte- 00:07:40.848 [2024-10-09 00:15:11.232751] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.232866] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.232967] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.233070] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a0a 00:07:40.848 [2024-10-09 00:15:11.233286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.233311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.848 [2024-10-09 00:15:11.233378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.233395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.848 [2024-10-09 00:15:11.233461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:243e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.233477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.848 [2024-10-09 00:15:11.233540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3e3e020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.233556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.848 #34 NEW cov: 12485 ft: 14984 corp: 20/402b lim: 30 exec/s: 34 rss: 74Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:07:40.848 [2024-10-09 00:15:11.272790] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.272922] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.273127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0d3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.273155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.848 [2024-10-09 00:15:11.273223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.848 [2024-10-09 00:15:11.273242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.848 #35 NEW cov: 12485 ft: 14996 corp: 21/415b lim: 30 exec/s: 35 rss: 74Mb L: 13/30 MS: 1 ChangeBinInt- 00:07:40.848 [2024-10-09 00:15:11.312952] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.313060] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002b3e 00:07:40.848 [2024-10-09 00:15:11.313164] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.848 [2024-10-09 00:15:11.313364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.313389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.313458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 
[2024-10-09 00:15:11.313478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.313542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.313558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.849 #36 NEW cov: 12485 ft: 15013 corp: 22/436b lim: 30 exec/s: 36 rss: 74Mb L: 21/30 MS: 1 InsertByte- 00:07:40.849 [2024-10-09 00:15:11.373092] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.373205] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.373407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.373433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.373501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.373521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.849 #37 NEW cov: 12485 ft: 15022 corp: 23/449b lim: 30 exec/s: 37 rss: 74Mb L: 13/30 MS: 1 ShuffleBytes- 00:07:40.849 [2024-10-09 00:15:11.413245] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.413353] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.413456] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.413670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.413695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.413763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.413780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.413849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.413877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.849 #38 NEW cov: 12485 ft: 15032 corp: 24/471b lim: 30 exec/s: 38 rss: 74Mb L: 22/30 MS: 1 InsertByte- 00:07:40.849 [2024-10-09 00:15:11.473382] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.473493] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x200003e3e 00:07:40.849 [2024-10-09 00:15:11.473700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e2e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.473726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.849 [2024-10-09 00:15:11.473794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.849 [2024-10-09 00:15:11.473817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.109 #39 NEW cov: 12485 ft: 15067 corp: 25/484b lim: 30 exec/s: 39 rss: 74Mb L: 13/30 MS: 1 ChangeBit- 00:07:41.109 [2024-10-09 00:15:11.533599] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.533710] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200002b3e 00:07:41.109 [2024-10-09 00:15:11.533821] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003c3e 00:07:41.109 [2024-10-09 00:15:11.534027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.534063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.534133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.534150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.534213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.534229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.109 #40 NEW cov: 12485 ft: 15104 corp: 26/505b lim: 30 exec/s: 40 rss: 74Mb L: 21/30 MS: 1 ChangeBit- 00:07:41.109 [2024-10-09 00:15:11.593711] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.593821] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.594024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.594052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.594117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.594135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.109 #41 NEW cov: 12485 ft: 15139 corp: 27/518b lim: 30 exec/s: 41 
rss: 74Mb L: 13/30 MS: 1 ChangeBinInt- 00:07:41.109 [2024-10-09 00:15:11.633902] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fdff 00:07:41.109 [2024-10-09 00:15:11.634014] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:41.109 [2024-10-09 00:15:11.634113] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:41.109 [2024-10-09 00:15:11.634219] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:41.109 [2024-10-09 00:15:11.634432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.634457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.634527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.634545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.634612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.634629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.634694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.634712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.109 #42 NEW cov: 12485 ft: 15156 corp: 28/544b lim: 30 exec/s: 42 rss: 74Mb L: 26/30 MS: 1 ChangeBinInt- 00:07:41.109 [2024-10-09 00:15:11.693949] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.694145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.694174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.109 #43 NEW cov: 12485 ft: 15512 corp: 29/553b lim: 30 exec/s: 43 rss: 74Mb L: 9/30 MS: 1 EraseBytes- 00:07:41.109 [2024-10-09 00:15:11.734120] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.734229] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.734331] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.109 [2024-10-09 00:15:11.734529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e02be cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.734553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.109 [2024-10-09 00:15:11.734618] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.109 [2024-10-09 00:15:11.734634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.110 [2024-10-09 00:15:11.734696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.110 [2024-10-09 00:15:11.734711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.369 #44 NEW cov: 12485 ft: 15541 corp: 30/573b lim: 30 exec/s: 44 rss: 75Mb L: 20/30 MS: 1 ShuffleBytes- 00:07:41.369 [2024-10-09 00:15:11.794272] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.369 [2024-10-09 00:15:11.794380] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.369 [2024-10-09 00:15:11.794485] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.369 [2024-10-09 00:15:11.794709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.794734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.369 [2024-10-09 00:15:11.794803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.794827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.369 [2024-10-09 00:15:11.794904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.794922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.369 #45 NEW cov: 12485 ft: 15551 corp: 31/595b lim: 30 exec/s: 45 rss: 75Mb L: 22/30 MS: 1 ChangeBit- 00:07:41.369 [2024-10-09 00:15:11.854441] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000c1c1 00:07:41.369 [2024-10-09 00:15:11.854554] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000b83e 00:07:41.369 [2024-10-09 00:15:11.854658] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.369 [2024-10-09 00:15:11.854858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e02be cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.854884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.369 [2024-10-09 00:15:11.854951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:c1c181c1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.854975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.369 [2024-10-09 
00:15:11.855039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.369 [2024-10-09 00:15:11.855056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.369 #46 NEW cov: 12485 ft: 15561 corp: 32/615b lim: 30 exec/s: 46 rss: 75Mb L: 20/30 MS: 1 ChangeBinInt- 00:07:41.369 [2024-10-09 00:15:11.914595] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.370 [2024-10-09 00:15:11.914709] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (63740) > buf size (4096) 00:07:41.370 [2024-10-09 00:15:11.914818] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3e3e 00:07:41.370 [2024-10-09 00:15:11.915016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.915041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.915107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e00e8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.915124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.915187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e8e800e8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.915204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.370 #47 NEW cov: 12485 ft: 15576 corp: 33/636b lim: 30 exec/s: 47 rss: 75Mb L: 21/30 MS: 1 InsertRepeatedBytes- 00:07:41.370 [2024-10-09 00:15:11.954738] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.370 [2024-10-09 00:15:11.954854] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.370 [2024-10-09 00:15:11.954957] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200005959 00:07:41.370 [2024-10-09 00:15:11.955057] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100005959 00:07:41.370 [2024-10-09 00:15:11.955255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.955280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.955347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.955364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.955427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 
00:15:11.955444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.955507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:59598159 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.955523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.370 #48 NEW cov: 12485 ft: 15600 corp: 34/664b lim: 30 exec/s: 48 rss: 75Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:07:41.370 [2024-10-09 00:15:11.994790] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.370 [2024-10-09 00:15:11.994911] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.370 [2024-10-09 00:15:11.995108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3a3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.995133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.370 [2024-10-09 00:15:11.995200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e0243 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.370 [2024-10-09 00:15:11.995217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.635 #49 NEW cov: 12485 ft: 15607 corp: 35/678b lim: 30 exec/s: 49 rss: 75Mb L: 14/30 MS: 1 InsertByte- 00:07:41.635 [2024-10-09 00:15:12.035015] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.635 [2024-10-09 00:15:12.035124] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (63740) > buf size (4096) 00:07:41.635 [2024-10-09 00:15:12.035228] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e3e 00:07:41.635 [2024-10-09 00:15:12.035328] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3e3e 00:07:41.635 [2024-10-09 00:15:12.035434] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003e0a 00:07:41.635 [2024-10-09 00:15:12.035638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.635 [2024-10-09 00:15:12.035663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.635 [2024-10-09 00:15:12.035728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3e3e003e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.635 [2024-10-09 00:15:12.035744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.635 [2024-10-09 00:15:12.035808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.635 [2024-10-09 00:15:12.035833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.635 [2024-10-09 00:15:12.035899] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:3f3e003e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.635 [2024-10-09 00:15:12.035917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.635 [2024-10-09 00:15:12.035979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:3e3e023e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.635 [2024-10-09 00:15:12.035997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.635 #50 NEW cov: 12485 ft: 15655 corp: 36/708b lim: 30 exec/s: 25 rss: 75Mb L: 30/30 MS: 1 CrossOver- 00:07:41.635 #50 DONE cov: 12485 ft: 15655 corp: 36/708b lim: 30 exec/s: 25 rss: 75Mb 00:07:41.635 Done 50 runs in 2 second(s) 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:41.635 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:41.636 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.636 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.636 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.636 00:15:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:41.636 [2024-10-09 00:15:12.233053] Starting SPDK v25.01-pre git sha1 
6101e4048 / DPDK 24.03.0 initialization... 00:07:41.636 [2024-10-09 00:15:12.233131] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886086 ] 00:07:42.214 [2024-10-09 00:15:12.541446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.214 [2024-10-09 00:15:12.631493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.214 [2024-10-09 00:15:12.690471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.214 [2024-10-09 00:15:12.706706] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:42.214 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.214 INFO: Seed: 867029359 00:07:42.214 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:42.214 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:42.214 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:42.214 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.214 #2 INITED exec/s: 0 rss: 66Mb 00:07:42.214 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.214 This may also happen if the target rejected all inputs we tried so far 00:07:42.214 [2024-10-09 00:15:12.762485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.214 [2024-10-09 00:15:12.762515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.214 [2024-10-09 00:15:12.762574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.214 [2024-10-09 00:15:12.762589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.214 [2024-10-09 00:15:12.762646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.214 [2024-10-09 00:15:12.762660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.473 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:42.473 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.473 #8 NEW cov: 12191 ft: 12190 corp: 2/26b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:42.473 [2024-10-09 00:15:13.103057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.473 [2024-10-09 00:15:13.103095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.731 #11 NEW cov: 12304 ft: 13218 corp: 3/33b lim: 35 exec/s: 0 rss: 73Mb L: 7/25 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 
00:07:42.731 [2024-10-09 00:15:13.143053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e00008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.143083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.731 #12 NEW cov: 12310 ft: 13486 corp: 4/40b lim: 35 exec/s: 0 rss: 73Mb L: 7/25 MS: 1 ChangeByte- 00:07:42.731 [2024-10-09 00:15:13.203483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.203511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.203569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.203584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.203640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.203655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.731 #13 NEW cov: 12395 ft: 13743 corp: 5/66b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CrossOver- 00:07:42.731 [2024-10-09 00:15:13.263328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.263354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.731 #14 NEW cov: 12395 ft: 13866 corp: 6/73b lim: 35 exec/s: 0 rss: 74Mb L: 7/26 MS: 1 CrossOver- 00:07:42.731 [2024-10-09 00:15:13.323434] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:42.731 [2024-10-09 00:15:13.324049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.324078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.324136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.324150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.324207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.324221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.324275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff 
cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.324292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.731 [2024-10-09 00:15:13.324349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.324363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:42.731 #16 NEW cov: 12406 ft: 14532 corp: 7/108b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:42.731 [2024-10-09 00:15:13.363673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7e00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.731 [2024-10-09 00:15:13.363699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.990 #17 NEW cov: 12406 ft: 14587 corp: 8/115b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 ChangeBit- 00:07:42.990 [2024-10-09 00:15:13.424225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.424251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.424310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.424325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.424380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.424393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.424448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.424462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.990 #18 NEW cov: 12406 ft: 14681 corp: 9/143b lim: 35 exec/s: 0 rss: 74Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:07:42.990 [2024-10-09 00:15:13.463897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0500008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.463923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.990 #21 NEW cov: 12406 ft: 14722 corp: 10/155b lim: 35 exec/s: 0 rss: 74Mb L: 12/35 MS: 3 EraseBytes-ChangeBinInt-CrossOver- 00:07:42.990 [2024-10-09 00:15:13.504043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0500008a cdw11:0000003d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.504071] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.990 #22 NEW cov: 12406 ft: 14754 corp: 11/167b lim: 35 exec/s: 0 rss: 74Mb L: 12/35 MS: 1 ChangeByte- 00:07:42.990 [2024-10-09 00:15:13.564554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.564582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.564641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.564659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.564718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.564732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.990 [2024-10-09 00:15:13.564789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:070700f0 cdw11:f00007f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.990 [2024-10-09 00:15:13.564803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.990 #23 NEW cov: 12406 ft: 14815 corp: 12/196b lim: 35 exec/s: 0 rss: 74Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:07:43.249 [2024-10-09 00:15:13.624385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e00008a cdw11:0a000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.624412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.249 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:43.249 #24 NEW cov: 12429 ft: 14883 corp: 13/203b lim: 35 exec/s: 0 rss: 74Mb L: 7/35 MS: 1 CopyPart- 00:07:43.249 [2024-10-09 00:15:13.664723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.664749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.664805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.664824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.664891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:0f000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.664905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.249 #25 NEW cov: 12429 ft: 14927 
corp: 14/229b lim: 35 exec/s: 0 rss: 74Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:43.249 [2024-10-09 00:15:13.704570] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.249 [2024-10-09 00:15:13.705177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.705207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.705268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.705282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.705342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.705356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.705415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fffe00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.705429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.705490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.705505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.249 #26 NEW cov: 12429 ft: 14939 corp: 15/264b lim: 35 exec/s: 26 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:07:43.249 [2024-10-09 00:15:13.764925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.764951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.765009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff7e00ff cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.765024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.249 #27 NEW cov: 12429 ft: 15149 corp: 16/279b lim: 35 exec/s: 27 rss: 74Mb L: 15/35 MS: 1 CrossOver- 00:07:43.249 [2024-10-09 00:15:13.824915] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.249 [2024-10-09 00:15:13.825535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.825564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.825621] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.825636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.825691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.825706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.825761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fffe00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.825776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.249 [2024-10-09 00:15:13.825830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00fb cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.249 [2024-10-09 00:15:13.825844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.249 #28 NEW cov: 12429 ft: 15237 corp: 17/314b lim: 35 exec/s: 28 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:07:43.509 [2024-10-09 00:15:13.885190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:13.885216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.509 #29 NEW cov: 12429 ft: 15337 corp: 18/321b lim: 35 exec/s: 29 rss: 74Mb L: 7/35 MS: 1 ChangeByte- 00:07:43.509 [2024-10-09 00:15:13.925188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:13.925213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.509 #30 NEW cov: 12429 ft: 15365 corp: 19/329b lim: 35 exec/s: 30 rss: 74Mb L: 8/35 MS: 1 InsertByte- 00:07:43.509 [2024-10-09 00:15:13.985406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1f00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:13.985432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.509 #31 NEW cov: 12429 ft: 15387 corp: 20/336b lim: 35 exec/s: 31 rss: 74Mb L: 7/35 MS: 1 ChangeByte- 00:07:43.509 [2024-10-09 00:15:14.025483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6ef0008a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:14.025508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.509 #32 NEW cov: 12429 ft: 15406 corp: 21/343b lim: 35 exec/s: 32 rss: 74Mb L: 7/35 MS: 1 CrossOver- 00:07:43.509 [2024-10-09 00:15:14.065899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:14.065924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.509 [2024-10-09 00:15:14.065982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:14.065997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.509 [2024-10-09 00:15:14.066054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:0f000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:14.066068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.509 #33 NEW cov: 12429 ft: 15416 corp: 22/369b lim: 35 exec/s: 33 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:07:43.509 [2024-10-09 00:15:14.125847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0500008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.509 [2024-10-09 00:15:14.125875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 #34 NEW cov: 12429 ft: 15486 corp: 23/381b lim: 35 exec/s: 34 rss: 75Mb L: 12/35 MS: 1 ShuffleBytes- 00:07:43.778 [2024-10-09 00:15:14.165763] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:43.778 [2024-10-09 00:15:14.166216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.166244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 [2024-10-09 00:15:14.166304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.166319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.778 [2024-10-09 00:15:14.166376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.166390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.778 #35 NEW cov: 12429 ft: 15553 corp: 24/404b lim: 35 exec/s: 35 rss: 75Mb L: 23/35 MS: 1 EraseBytes- 00:07:43.778 [2024-10-09 00:15:14.226109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0500008a cdw11:0000003d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.226134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 #36 NEW cov: 12429 ft: 15589 corp: 25/416b lim: 35 exec/s: 36 rss: 75Mb L: 12/35 MS: 1 CopyPart- 00:07:43.778 [2024-10-09 00:15:14.286261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 
nsid:0 cdw10:1f00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.286287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 #37 NEW cov: 12429 ft: 15594 corp: 26/423b lim: 35 exec/s: 37 rss: 75Mb L: 7/35 MS: 1 ChangeByte- 00:07:43.778 [2024-10-09 00:15:14.346441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:006e008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.346467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 #38 NEW cov: 12429 ft: 15640 corp: 27/430b lim: 35 exec/s: 38 rss: 75Mb L: 7/35 MS: 1 ShuffleBytes- 00:07:43.778 [2024-10-09 00:15:14.386935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.386961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.778 [2024-10-09 00:15:14.387017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.387032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.778 [2024-10-09 00:15:14.387086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:0f000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.387099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.778 [2024-10-09 00:15:14.387152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.778 [2024-10-09 00:15:14.387165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.038 #39 NEW cov: 12429 ft: 15649 corp: 28/464b lim: 35 exec/s: 39 rss: 75Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:44.038 [2024-10-09 00:15:14.446990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0af000fb cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.447016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.447090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.447105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.447162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:f60f00f0 cdw11:0f000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.447176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.038 #40 NEW cov: 12429 
ft: 15659 corp: 29/491b lim: 35 exec/s: 40 rss: 75Mb L: 27/35 MS: 1 InsertByte- 00:07:44.038 [2024-10-09 00:15:14.487028] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:44.038 [2024-10-09 00:15:14.487254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.487280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.487337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.487355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.487413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.487427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.487481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0f000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.487497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.038 #41 NEW cov: 12429 ft: 15668 corp: 30/524b lim: 35 exec/s: 41 rss: 75Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:44.038 [2024-10-09 00:15:14.526954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8a00008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.526979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #44 NEW cov: 12429 ft: 15695 corp: 31/535b lim: 35 exec/s: 44 rss: 75Mb L: 11/35 MS: 3 EraseBytes-ChangeByte-CrossOver- 00:07:44.038 [2024-10-09 00:15:14.567089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6a00008a cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.567114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #45 NEW cov: 12429 ft: 15720 corp: 32/542b lim: 35 exec/s: 45 rss: 75Mb L: 7/35 MS: 1 ChangeBinInt- 00:07:44.038 [2024-10-09 00:15:14.607188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0500008a cdw11:3d00cc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.607214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #46 NEW cov: 12429 ft: 15774 corp: 33/555b lim: 35 exec/s: 46 rss: 75Mb L: 13/35 MS: 1 InsertByte- 00:07:44.038 [2024-10-09 00:15:14.667269] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:44.038 [2024-10-09 00:15:14.667701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 
cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.667729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.667786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.667801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.038 [2024-10-09 00:15:14.667863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00fe cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.038 [2024-10-09 00:15:14.667878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.298 #47 NEW cov: 12429 ft: 15782 corp: 34/581b lim: 35 exec/s: 47 rss: 75Mb L: 26/35 MS: 1 EraseBytes- 00:07:44.298 [2024-10-09 00:15:14.707774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.707800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.707860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.707878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.707934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:0f000f09 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.707948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.298 #48 NEW cov: 12429 ft: 15787 corp: 35/607b lim: 35 exec/s: 48 rss: 75Mb L: 26/35 MS: 1 ChangeByte- 00:07:44.298 [2024-10-09 00:15:14.748109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0000a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.748135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.748190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.748206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.748260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:f6000ff0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.748275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.748328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0f0f000f 
cdw11:06000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.748341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.748396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:0f06000f cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.748410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.788218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f0f0004a cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.788243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.788299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f0f000f0 cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.788313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.788366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0f0f00f6 cdw11:f6000ff0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.788380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.788434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0f0f000f cdw11:06000f0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.788447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.298 [2024-10-09 00:15:14.788502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:0f06000f cdw11:f000f0f0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.298 [2024-10-09 00:15:14.788515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.298 #50 NEW cov: 12429 ft: 15794 corp: 36/642b lim: 35 exec/s: 25 rss: 75Mb L: 35/35 MS: 2 CopyPart-ChangeBit- 00:07:44.298 #50 DONE cov: 12429 ft: 15794 corp: 36/642b lim: 35 exec/s: 25 rss: 75Mb 00:07:44.298 Done 50 runs in 2 second(s) 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.558 00:15:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:44.558 [2024-10-09 00:15:14.992872] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:44.558 [2024-10-09 00:15:14.992946] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886472 ] 00:07:44.817 [2024-10-09 00:15:15.196930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.817 [2024-10-09 00:15:15.270200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.817 [2024-10-09 00:15:15.329190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.817 [2024-10-09 00:15:15.345448] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:44.817 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.817 INFO: Seed: 3505042349 00:07:44.817 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:44.817 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:44.817 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:44.817 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.817 #2 INITED exec/s: 0 rss: 66Mb 00:07:44.817 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:44.817 This may also happen if the target rejected all inputs we tried so far 00:07:45.076 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:45.076 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.076 #5 NEW cov: 12086 ft: 12081 corp: 2/8b lim: 20 exec/s: 0 rss: 73Mb L: 7/7 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:45.340 #9 NEW cov: 12199 ft: 12700 corp: 3/15b lim: 20 exec/s: 0 rss: 73Mb L: 7/7 MS: 4 CrossOver-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:45.340 #10 NEW cov: 12219 ft: 13397 corp: 4/25b lim: 20 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:45.340 #11 NEW cov: 12304 ft: 13611 corp: 5/31b lim: 20 exec/s: 0 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:07:45.340 #12 NEW cov: 12308 ft: 13948 corp: 6/44b lim: 20 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:07:45.340 #16 NEW cov: 12308 ft: 14117 corp: 7/58b lim: 20 exec/s: 0 rss: 74Mb L: 14/14 MS: 4 ShuffleBytes-CrossOver-CrossOver-CrossOver- 00:07:45.598 #17 NEW cov: 12308 ft: 14211 corp: 8/65b lim: 20 exec/s: 0 rss: 74Mb L: 7/14 MS: 1 ShuffleBytes- 00:07:45.598 #18 NEW cov: 12308 ft: 14240 corp: 9/71b lim: 20 exec/s: 0 rss: 74Mb L: 6/14 MS: 1 CrossOver- 00:07:45.598 #19 NEW cov: 12308 ft: 14291 corp: 10/77b lim: 20 exec/s: 0 rss: 74Mb L: 6/14 MS: 1 ShuffleBytes- 00:07:45.598 #22 NEW cov: 12325 ft: 14519 corp: 11/95b lim: 20 exec/s: 0 rss: 74Mb L: 18/18 MS: 3 EraseBytes-ChangeByte-InsertRepeatedBytes- 00:07:45.598 #23 NEW cov: 12325 ft: 14540 corp: 12/114b lim: 20 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 InsertByte- 00:07:45.857 [2024-10-09 00:15:16.243435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:45.857 [2024-10-09 00:15:16.243479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.857 NEW_FUNC[1/20]: 0x1332ad8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:07:45.857 NEW_FUNC[2/20]: 0x1333658 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:07:45.857 #24 NEW cov: 12650 ft: 14911 corp: 13/129b lim: 20 exec/s: 0 rss: 74Mb L: 15/19 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:45.857 #25 NEW cov: 12650 ft: 14936 corp: 14/147b lim: 20 exec/s: 0 rss: 74Mb L: 18/19 MS: 1 CrossOver- 00:07:45.857 [2024-10-09 00:15:16.333778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:45.857 [2024-10-09 00:15:16.333808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.857 #26 NEW cov: 12650 ft: 15062 corp: 15/165b lim: 20 exec/s: 0 rss: 74Mb L: 18/19 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:45.857 #27 NEW cov: 12650 ft: 15093 corp: 16/172b lim: 20 exec/s: 27 rss: 74Mb L: 7/19 MS: 1 ChangeBit- 00:07:45.857 #32 NEW cov: 12650 ft: 15095 corp: 17/181b lim: 20 exec/s: 32 rss: 74Mb L: 9/19 MS: 5 ShuffleBytes-ChangeBit-InsertByte-InsertByte-InsertRepeatedBytes- 00:07:45.857 [2024-10-09 00:15:16.473945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:07:45.857 [2024-10-09 00:15:16.473973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.116 #33 NEW cov: 12650 ft: 15125 corp: 18/192b lim: 20 exec/s: 33 rss: 74Mb L: 11/19 MS: 1 EraseBytes- 00:07:46.116 #34 NEW cov: 12650 ft: 15134 corp: 19/204b lim: 20 exec/s: 34 rss: 74Mb L: 12/19 MS: 1 InsertRepeatedBytes- 00:07:46.116 #35 NEW cov: 12650 ft: 15168 corp: 20/219b lim: 20 exec/s: 35 rss: 74Mb L: 15/19 MS: 1 CrossOver- 00:07:46.116 #36 NEW cov: 12653 ft: 15227 corp: 21/231b lim: 20 exec/s: 36 rss: 74Mb L: 12/19 MS: 1 CMP- DE: "\001\000\000\000\002P\306M"- 00:07:46.116 #37 NEW cov: 12653 ft: 15243 corp: 22/238b lim: 20 exec/s: 37 rss: 74Mb L: 7/19 MS: 1 ChangeByte- 00:07:46.116 [2024-10-09 00:15:16.714811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.116 [2024-10-09 00:15:16.714843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.116 #38 NEW cov: 12653 ft: 15264 corp: 23/256b lim: 20 exec/s: 38 rss: 74Mb L: 18/19 MS: 1 ChangeByte- 00:07:46.375 [2024-10-09 00:15:16.754748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.375 [2024-10-09 00:15:16.754775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.375 #39 NEW cov: 12653 ft: 15312 corp: 24/267b lim: 20 exec/s: 39 rss: 74Mb L: 11/19 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:46.375 #40 NEW cov: 12653 ft: 15382 corp: 25/274b lim: 20 exec/s: 40 rss: 74Mb L: 7/19 MS: 1 ChangeBinInt- 00:07:46.375 #41 NEW cov: 12653 ft: 15401 corp: 26/289b lim: 20 exec/s: 41 rss: 74Mb L: 15/19 MS: 1 CopyPart- 00:07:46.375 #42 NEW cov: 12653 ft: 15415 corp: 27/305b lim: 20 exec/s: 42 rss: 75Mb L: 16/19 MS: 1 CopyPart- 00:07:46.375 #43 NEW cov: 12653 ft: 15432 corp: 28/311b lim: 20 exec/s: 43 rss: 75Mb L: 6/19 MS: 1 ChangeBit- 00:07:46.634 #46 NEW cov: 12653 ft: 15448 corp: 29/316b lim: 20 exec/s: 46 rss: 75Mb L: 5/19 MS: 3 InsertByte-ShuffleBytes-CrossOver- 00:07:46.634 #47 NEW cov: 12653 ft: 15468 corp: 30/324b lim: 20 exec/s: 47 rss: 75Mb L: 8/19 MS: 1 InsertByte- 00:07:46.634 #48 NEW cov: 12653 ft: 15523 corp: 31/344b lim: 20 exec/s: 48 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:07:46.634 #49 NEW cov: 12653 ft: 15549 corp: 32/349b lim: 20 exec/s: 49 rss: 75Mb L: 5/20 MS: 1 EraseBytes- 00:07:46.634 [2024-10-09 00:15:17.216306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.634 [2024-10-09 00:15:17.216333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.634 #50 NEW cov: 12653 ft: 15555 corp: 33/368b lim: 20 exec/s: 50 rss: 75Mb L: 19/20 MS: 1 InsertByte- 00:07:46.634 [2024-10-09 00:15:17.256173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.634 [2024-10-09 00:15:17.256200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.894 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:46.894 #51 NEW cov: 12676 ft: 15590 corp: 34/376b lim: 20 exec/s: 51 rss: 75Mb L: 8/20 MS: 1 CopyPart- 00:07:46.894 #52 NEW cov: 12676 ft: 15602 corp: 35/383b lim: 20 exec/s: 52 rss: 75Mb L: 7/20 MS: 1 ChangeBit- 00:07:46.894 #53 NEW cov: 12676 ft: 15617 corp: 36/397b lim: 20 exec/s: 53 rss: 75Mb L: 14/20 MS: 1 PersAutoDict- DE: "\001\000\000\000\002P\306M"- 00:07:46.894 [2024-10-09 00:15:17.396804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.894 [2024-10-09 00:15:17.396834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.894 #54 NEW cov: 12676 ft: 15632 corp: 37/415b lim: 20 exec/s: 27 rss: 75Mb L: 18/20 MS: 1 ShuffleBytes- 00:07:46.894 #54 DONE cov: 12676 ft: 15632 corp: 37/415b lim: 20 exec/s: 27 rss: 75Mb 00:07:46.894 ###### Recommended dictionary. ###### 00:07:46.894 "\000\000\000\000\000\000\000\000" # Uses: 2 00:07:46.894 "\001\000\000\000\002P\306M" # Uses: 1 00:07:46.894 ###### End of recommended dictionary. ###### 00:07:46.894 Done 54 runs in 2 second(s) 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.153 00:15:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:47.153 [2024-10-09 00:15:17.599223] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:47.153 [2024-10-09 00:15:17.599291] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886778 ] 00:07:47.413 [2024-10-09 00:15:17.802133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.413 [2024-10-09 00:15:17.875710] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.413 [2024-10-09 00:15:17.935091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.413 [2024-10-09 00:15:17.951331] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:47.413 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.413 INFO: Seed: 1817055188 00:07:47.413 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:47.413 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:47.413 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:47.413 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.413 #2 INITED exec/s: 0 rss: 67Mb 00:07:47.413 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:47.413 This may also happen if the target rejected all inputs we tried so far 00:07:47.413 [2024-10-09 00:15:18.007080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.413 [2024-10-09 00:15:18.007109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.413 [2024-10-09 00:15:18.007164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.413 [2024-10-09 00:15:18.007178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.413 [2024-10-09 00:15:18.007228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.413 [2024-10-09 00:15:18.007241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.932 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:47.932 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.932 #28 NEW cov: 12198 ft: 12211 corp: 2/27b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:47.932 [2024-10-09 00:15:18.347904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 
cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.932 [2024-10-09 00:15:18.347940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.932 [2024-10-09 00:15:18.347996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.932 [2024-10-09 00:15:18.348010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.932 [2024-10-09 00:15:18.348061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.932 [2024-10-09 00:15:18.348074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.932 #34 NEW cov: 12325 ft: 12717 corp: 3/53b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CopyPart- 00:07:47.932 [2024-10-09 00:15:18.408013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.932 [2024-10-09 00:15:18.408040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.932 [2024-10-09 00:15:18.408094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.932 [2024-10-09 00:15:18.408108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.408163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.408177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.933 #35 NEW cov: 12331 ft: 13023 corp: 4/79b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 ChangeBit- 00:07:47.933 [2024-10-09 00:15:18.468277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.468305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.468358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.468372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.468424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.468437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.468489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 
nsid:0 cdw10:12121212 cdw11:120a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.468502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.933 #41 NEW cov: 12416 ft: 13545 corp: 5/113b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:47.933 [2024-10-09 00:15:18.528309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.528338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.528392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.528407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.933 [2024-10-09 00:15:18.528459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.933 [2024-10-09 00:15:18.528473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.933 #42 NEW cov: 12416 ft: 13698 corp: 6/139b lim: 35 exec/s: 0 rss: 74Mb L: 26/34 MS: 1 ChangeBit- 00:07:48.192 [2024-10-09 00:15:18.568439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.568466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.568523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10008e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.568538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.568592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:008e0000 cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.568606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.192 #43 NEW cov: 12416 ft: 13777 corp: 7/165b lim: 35 exec/s: 0 rss: 74Mb L: 26/34 MS: 1 CMP- DE: "\020\000\000\000\000\000\000\000"- 00:07:48.192 [2024-10-09 00:15:18.608645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.608669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.608724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.608738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.608792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.608805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.608886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.608900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.192 #44 NEW cov: 12416 ft: 13862 corp: 8/199b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:48.192 [2024-10-09 00:15:18.668685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.668710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.668766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.668783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.668838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e008e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.668853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.192 #45 NEW cov: 12416 ft: 13926 corp: 9/225b lim: 35 exec/s: 0 rss: 74Mb L: 26/34 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:48.192 [2024-10-09 00:15:18.708802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.708840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.708896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:da000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.708911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.708964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e0000 cdw11:8e0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.708978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.192 #46 NEW cov: 12416 ft: 13947 corp: 10/252b lim: 35 exec/s: 0 rss: 74Mb L: 27/34 MS: 1 InsertByte- 00:07:48.192 [2024-10-09 00:15:18.769247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:48.192 [2024-10-09 00:15:18.769272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.769327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.769341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.769393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8c8e cdw11:8e120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.769407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.769457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.769471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.192 [2024-10-09 00:15:18.769523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.769537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.192 #47 NEW cov: 12416 ft: 14098 corp: 11/287b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:07:48.192 [2024-10-09 00:15:18.808761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.192 [2024-10-09 00:15:18.808786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.450 #48 NEW cov: 12416 ft: 14842 corp: 12/300b lim: 35 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 EraseBytes- 00:07:48.450 [2024-10-09 00:15:18.849228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.849253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.849308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.849322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.849375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.849389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.451 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:48.451 #49 NEW cov: 12439 ft: 14888 corp: 13/323b lim: 35 exec/s: 0 rss: 
75Mb L: 23/35 MS: 1 EraseBytes- 00:07:48.451 [2024-10-09 00:15:18.909576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e988e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.909602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.909658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.909675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.909728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.909742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.909796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.909829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.451 #50 NEW cov: 12439 ft: 14895 corp: 14/357b lim: 35 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:48.451 [2024-10-09 00:15:18.969551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.969577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.969632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.969646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:18.969698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:008e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:18.969712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.451 #51 NEW cov: 12439 ft: 14920 corp: 15/380b lim: 35 exec/s: 51 rss: 75Mb L: 23/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:48.451 [2024-10-09 00:15:19.029684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.029713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:19.029770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.029783] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:19.029831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.029846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.451 #52 NEW cov: 12439 ft: 14946 corp: 16/407b lim: 35 exec/s: 52 rss: 75Mb L: 27/35 MS: 1 CrossOver- 00:07:48.451 [2024-10-09 00:15:19.069837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.069861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:19.069916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.069930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.451 [2024-10-09 00:15:19.069982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.451 [2024-10-09 00:15:19.069996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.709 #53 NEW cov: 12439 ft: 15010 corp: 17/434b lim: 35 exec/s: 53 rss: 75Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:48.709 [2024-10-09 00:15:19.129860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00008e10 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.709 [2024-10-09 00:15:19.129886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.709 [2024-10-09 00:15:19.129942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e0a0000 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.709 [2024-10-09 00:15:19.129956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.709 #54 NEW cov: 12439 ft: 15218 corp: 18/452b lim: 35 exec/s: 54 rss: 75Mb L: 18/35 MS: 1 EraseBytes- 00:07:48.709 [2024-10-09 00:15:19.190179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.709 [2024-10-09 00:15:19.190204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.709 [2024-10-09 00:15:19.190275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10008e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.709 [2024-10-09 00:15:19.190289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.190345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:008e0000 cdw11:0a8e0001 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.190359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.710 #55 NEW cov: 12439 ft: 15235 corp: 19/478b lim: 35 exec/s: 55 rss: 75Mb L: 26/35 MS: 1 ShuffleBytes- 00:07:48.710 [2024-10-09 00:15:19.230413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.230442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.230496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008e10 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.230510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.230564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e100000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.230578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.230629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:8e0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.230643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.710 #56 NEW cov: 12439 ft: 15253 corp: 20/512b lim: 35 exec/s: 56 rss: 75Mb L: 34/35 MS: 1 PersAutoDict- DE: "\020\000\000\000\000\000\000\000"- 00:07:48.710 [2024-10-09 00:15:19.290133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.290158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.710 #57 NEW cov: 12439 ft: 15275 corp: 21/525b lim: 35 exec/s: 57 rss: 75Mb L: 13/35 MS: 1 EraseBytes- 00:07:48.710 [2024-10-09 00:15:19.330537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:0e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.330561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.330615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.330629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.710 [2024-10-09 00:15:19.330681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.710 [2024-10-09 00:15:19.330695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.969 #58 NEW 
cov: 12439 ft: 15291 corp: 22/552b lim: 35 exec/s: 58 rss: 75Mb L: 27/35 MS: 1 ChangeBit- 00:07:48.969 [2024-10-09 00:15:19.370987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.371011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.969 [2024-10-09 00:15:19.371067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.371081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.969 [2024-10-09 00:15:19.371132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8c8e cdw11:8e120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.371146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.969 [2024-10-09 00:15:19.371201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.371215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.969 [2024-10-09 00:15:19.371268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:8e8e8e8e cdw11:0e8e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.371282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.969 #59 NEW cov: 12439 ft: 15320 corp: 23/587b lim: 35 exec/s: 59 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:48.969 [2024-10-09 00:15:19.430978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e98c8 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.969 [2024-10-09 00:15:19.431002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.431055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.431069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.431123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.431137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.431188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.431201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.970 #60 NEW 
cov: 12439 ft: 15367 corp: 24/621b lim: 35 exec/s: 60 rss: 75Mb L: 34/35 MS: 1 ChangeByte- 00:07:48.970 [2024-10-09 00:15:19.491014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.491039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.491093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.491107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.491162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.491175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.970 #61 NEW cov: 12439 ft: 15377 corp: 25/647b lim: 35 exec/s: 61 rss: 75Mb L: 26/35 MS: 1 CopyPart- 00:07:48.970 [2024-10-09 00:15:19.531165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.531190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.531245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ae8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.531259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.531314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.531328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.970 #62 NEW cov: 12439 ft: 15384 corp: 26/673b lim: 35 exec/s: 62 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:07:48.970 [2024-10-09 00:15:19.571391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.571415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.571471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008e10 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.571484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.571539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e100000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.571552] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.970 [2024-10-09 00:15:19.571604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:8e0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.970 [2024-10-09 00:15:19.571618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.229 #63 NEW cov: 12439 ft: 15387 corp: 27/707b lim: 35 exec/s: 63 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:07:49.229 [2024-10-09 00:15:19.631381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.631406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.631462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.631475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.631527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:8e290001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.631541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.229 #64 NEW cov: 12439 ft: 15441 corp: 28/734b lim: 35 exec/s: 64 rss: 75Mb L: 27/35 MS: 1 InsertByte- 00:07:49.229 [2024-10-09 00:15:19.691697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.691721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.691776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:10008e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.691789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.691842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:008e0000 cdw11:0a8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.691857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.691908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:8e8e8e8e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.691925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.229 #65 NEW cov: 12439 ft: 15451 corp: 29/762b lim: 35 exec/s: 65 rss: 75Mb L: 28/35 MS: 1 CopyPart- 00:07:49.229 [2024-10-09 00:15:19.731536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8ede 
cdw11:8e8e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.731562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.229 [2024-10-09 00:15:19.731615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.229 [2024-10-09 00:15:19.731629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.229 #66 NEW cov: 12439 ft: 15471 corp: 30/776b lim: 35 exec/s: 66 rss: 75Mb L: 14/35 MS: 1 InsertByte- 00:07:49.229 [2024-10-09 00:15:19.791885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8ece0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.791910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.230 [2024-10-09 00:15:19.791965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.791978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.230 [2024-10-09 00:15:19.792029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8c cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.792043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.230 #67 NEW cov: 12439 ft: 15480 corp: 31/802b lim: 35 exec/s: 67 rss: 75Mb L: 26/35 MS: 1 ChangeBit- 00:07:49.230 [2024-10-09 00:15:19.832174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.832198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.230 [2024-10-09 00:15:19.832252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.832266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.230 [2024-10-09 00:15:19.832317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e8e cdw11:1a120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.832331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.230 [2024-10-09 00:15:19.832383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.230 [2024-10-09 00:15:19.832397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.230 #68 NEW cov: 12439 ft: 15499 corp: 32/836b lim: 35 exec/s: 68 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:07:49.489 [2024-10-09 00:15:19.872273] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.872298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.489 [2024-10-09 00:15:19.872355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e8e8e8e cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.872369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.489 [2024-10-09 00:15:19.872421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:8e8e8e0b cdw11:12120000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.872435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.489 [2024-10-09 00:15:19.872488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:12121212 cdw11:12220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.872502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.489 #69 NEW cov: 12439 ft: 15534 corp: 33/870b lim: 35 exec/s: 69 rss: 75Mb L: 34/35 MS: 1 ChangeByte- 00:07:49.489 [2024-10-09 00:15:19.912045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00008e10 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.912069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.489 [2024-10-09 00:15:19.912120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:8e0a0000 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.912134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.489 [2024-10-09 00:15:19.972056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00008e10 cdw11:8e8e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.489 [2024-10-09 00:15:19.972080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.489 #71 NEW cov: 12439 ft: 15542 corp: 34/881b lim: 35 exec/s: 35 rss: 75Mb L: 11/35 MS: 2 ShuffleBytes-EraseBytes- 00:07:49.489 #71 DONE cov: 12439 ft: 15542 corp: 34/881b lim: 35 exec/s: 35 rss: 75Mb 00:07:49.489 ###### Recommended dictionary. ###### 00:07:49.489 "\020\000\000\000\000\000\000\000" # Uses: 1 00:07:49.489 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:49.489 ###### End of recommended dictionary. 
###### 00:07:49.489 Done 71 runs in 2 second(s) 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.749 00:15:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:49.749 [2024-10-09 00:15:20.180712] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:49.749 [2024-10-09 00:15:20.180798] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887132 ] 00:07:50.009 [2024-10-09 00:15:20.385280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.009 [2024-10-09 00:15:20.460705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.009 [2024-10-09 00:15:20.520384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.009 [2024-10-09 00:15:20.536627] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:50.009 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:50.009 INFO: Seed: 105107458 00:07:50.009 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:50.009 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:50.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:50.009 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.009 #2 INITED exec/s: 0 rss: 66Mb 00:07:50.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.009 This may also happen if the target rejected all inputs we tried so far 00:07:50.009 [2024-10-09 00:15:20.596010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.009 [2024-10-09 00:15:20.596052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.009 [2024-10-09 00:15:20.596108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.009 [2024-10-09 00:15:20.596122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.009 [2024-10-09 00:15:20.596175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.009 [2024-10-09 00:15:20.596188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.009 [2024-10-09 00:15:20.596241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.009 [2024-10-09 00:15:20.596254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.577 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:50.577 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.577 #8 NEW cov: 12223 ft: 12224 corp: 2/38b lim: 45 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:50.577 [2024-10-09 00:15:20.927441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.927537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:20.927657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.927697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:20.927809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 
00:15:20.927860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:20.927971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.928010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.577 #19 NEW cov: 12336 ft: 12959 corp: 3/75b lim: 45 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ChangeBit- 00:07:50.577 [2024-10-09 00:15:20.996757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.996784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:20.996842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.996856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:20.996910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:20.996923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.577 #23 NEW cov: 12342 ft: 13619 corp: 4/105b lim: 45 exec/s: 0 rss: 74Mb L: 30/37 MS: 4 CopyPart-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:07:50.577 [2024-10-09 00:15:21.036807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.036836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.036892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.036906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.036960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.036974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.577 #24 NEW cov: 12427 ft: 13892 corp: 5/135b lim: 45 exec/s: 0 rss: 74Mb L: 30/37 MS: 1 CopyPart- 00:07:50.577 [2024-10-09 00:15:21.097156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.097181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.097238] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.097252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.097307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.097321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.097375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.097388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.577 #25 NEW cov: 12427 ft: 13979 corp: 6/172b lim: 45 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ShuffleBytes- 00:07:50.577 [2024-10-09 00:15:21.137265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.137290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.137346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.137360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.137416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.137429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.137484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.137496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.577 #26 NEW cov: 12427 ft: 14069 corp: 7/211b lim: 45 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CopyPart- 00:07:50.577 [2024-10-09 00:15:21.197095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.197120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.577 [2024-10-09 00:15:21.197175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.577 [2024-10-09 00:15:21.197189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.847 #27 NEW cov: 12427 ft: 14418 corp: 8/233b lim: 
45 exec/s: 0 rss: 74Mb L: 22/39 MS: 1 EraseBytes- 00:07:50.847 [2024-10-09 00:15:21.257571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:33333333 cdw11:33330001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.257596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.257652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:33333333 cdw11:33330001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.257666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.257721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:33333333 cdw11:33330001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.257738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.257792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:33333333 cdw11:33330001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.257805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.847 #28 NEW cov: 12427 ft: 14577 corp: 9/273b lim: 45 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:50.847 [2024-10-09 00:15:21.297681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.297705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.297777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.297791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.297848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.297861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.297915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.297928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.847 #29 NEW cov: 12427 ft: 14603 corp: 10/310b lim: 45 exec/s: 0 rss: 74Mb L: 37/40 MS: 1 ChangeBinInt- 00:07:50.847 [2024-10-09 00:15:21.357718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.357742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.357798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.357817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.357874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.357888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.847 #30 NEW cov: 12427 ft: 14662 corp: 11/340b lim: 45 exec/s: 0 rss: 74Mb L: 30/40 MS: 1 ChangeBinInt- 00:07:50.847 [2024-10-09 00:15:21.397676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.397700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.847 [2024-10-09 00:15:21.397754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.397768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.847 #31 NEW cov: 12427 ft: 14681 corp: 12/362b lim: 45 exec/s: 0 rss: 74Mb L: 22/40 MS: 1 ChangeByte- 00:07:50.847 [2024-10-09 00:15:21.457684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:aeaeaeae cdw11:aeae0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.847 [2024-10-09 00:15:21.457708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.108 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:51.108 #34 NEW cov: 12450 ft: 15416 corp: 13/373b lim: 45 exec/s: 0 rss: 74Mb L: 11/40 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:51.108 [2024-10-09 00:15:21.498307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.108 [2024-10-09 00:15:21.498332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.108 [2024-10-09 00:15:21.498404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.108 [2024-10-09 00:15:21.498419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.108 [2024-10-09 00:15:21.498474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.108 [2024-10-09 00:15:21.498487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.108 [2024-10-09 00:15:21.498542] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.108 [2024-10-09 00:15:21.498555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.108 #35 NEW cov: 12450 ft: 15439 corp: 14/410b lim: 45 exec/s: 0 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:07:51.108 [2024-10-09 00:15:21.558450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.108 [2024-10-09 00:15:21.558474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.108 [2024-10-09 00:15:21.558545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.558559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.558614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.558627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.558681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.558694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 #36 NEW cov: 12450 ft: 15467 corp: 15/447b lim: 45 exec/s: 36 rss: 74Mb L: 37/40 MS: 1 ShuffleBytes- 00:07:51.109 [2024-10-09 00:15:21.598589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.598614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.598672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.598690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.598746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.598759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.598822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.598836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 #37 NEW cov: 12450 ft: 15479 
corp: 16/485b lim: 45 exec/s: 37 rss: 74Mb L: 38/40 MS: 1 CMP- DE: "\004\000\000\000\000\000\000\000"- 00:07:51.109 [2024-10-09 00:15:21.638673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.638700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.638757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.638771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.638821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.638832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.638868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeceeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.638882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.109 #38 NEW cov: 12450 ft: 15501 corp: 17/522b lim: 45 exec/s: 38 rss: 74Mb L: 37/40 MS: 1 CopyPart- 00:07:51.109 [2024-10-09 00:15:21.678624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.678649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.678705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.678719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.678773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.678786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 #39 NEW cov: 12450 ft: 15534 corp: 18/552b lim: 45 exec/s: 39 rss: 74Mb L: 30/40 MS: 1 ChangeBit- 00:07:51.109 [2024-10-09 00:15:21.718943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:aeaeaeae cdw11:ae7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.718968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.719024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.719041] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.719095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.719108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.109 [2024-10-09 00:15:21.719162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.109 [2024-10-09 00:15:21.719175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.368 #40 NEW cov: 12450 ft: 15561 corp: 19/593b lim: 45 exec/s: 40 rss: 74Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:51.368 [2024-10-09 00:15:21.779171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.779199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.368 [2024-10-09 00:15:21.779256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.779270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.368 [2024-10-09 00:15:21.779323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.779337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.368 [2024-10-09 00:15:21.779391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.779405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.368 #41 NEW cov: 12450 ft: 15591 corp: 20/630b lim: 45 exec/s: 41 rss: 74Mb L: 37/41 MS: 1 CopyPart- 00:07:51.368 [2024-10-09 00:15:21.839337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:3b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.839365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.368 [2024-10-09 00:15:21.839420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.368 [2024-10-09 00:15:21.839434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.368 [2024-10-09 00:15:21.839486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.839499] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.839553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.839567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.369 #42 NEW cov: 12450 ft: 15617 corp: 21/668b lim: 45 exec/s: 42 rss: 74Mb L: 38/41 MS: 1 ChangeByte- 00:07:51.369 [2024-10-09 00:15:21.899655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:3b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.899684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.899741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.899755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.899809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0a000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.899827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.899882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.899894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.899947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.899962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.369 #48 NEW cov: 12450 ft: 15694 corp: 22/713b lim: 45 exec/s: 48 rss: 74Mb L: 45/45 MS: 1 CopyPart- 00:07:51.369 [2024-10-09 00:15:21.959612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeeef20a cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.959639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.959695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.959708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.959763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.959776] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.959836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.959849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.369 #49 NEW cov: 12450 ft: 15723 corp: 23/751b lim: 45 exec/s: 49 rss: 74Mb L: 38/45 MS: 1 InsertByte- 00:07:51.369 [2024-10-09 00:15:21.999595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:3b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.999620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.999678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.999691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.369 [2024-10-09 00:15:21.999746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.369 [2024-10-09 00:15:21.999759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.628 #50 NEW cov: 12450 ft: 15731 corp: 24/779b lim: 45 exec/s: 50 rss: 74Mb L: 28/45 MS: 1 EraseBytes- 00:07:51.628 [2024-10-09 00:15:22.059940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:aeaeaeae cdw11:ae7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.059965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.060020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.060033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.060088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.060101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.060157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:7d7d7d7d cdw11:7d7d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.060170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.628 #51 NEW cov: 12450 ft: 15763 corp: 25/820b lim: 45 exec/s: 51 rss: 75Mb L: 41/45 MS: 1 ChangeBinInt- 00:07:51.628 [2024-10-09 00:15:22.120117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 
nsid:0 cdw10:eeee0aee cdw11:eeee0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.120141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.120199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.120212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.120266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.120280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.120330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.120343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.628 #52 NEW cov: 12450 ft: 15771 corp: 26/858b lim: 45 exec/s: 52 rss: 75Mb L: 38/45 MS: 1 InsertByte- 00:07:51.628 [2024-10-09 00:15:22.159797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.159826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.159882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.159897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.628 #53 NEW cov: 12450 ft: 15795 corp: 27/878b lim: 45 exec/s: 53 rss: 75Mb L: 20/45 MS: 1 EraseBytes- 00:07:51.628 [2024-10-09 00:15:22.220047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee00ee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.220074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.628 [2024-10-09 00:15:22.220144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeece cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.628 [2024-10-09 00:15:22.220158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.887 #54 NEW cov: 12450 ft: 15801 corp: 28/900b lim: 45 exec/s: 54 rss: 75Mb L: 22/45 MS: 1 ChangeBit- 00:07:51.887 [2024-10-09 00:15:22.280371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.280396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.887 
[2024-10-09 00:15:22.280454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.280468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.887 [2024-10-09 00:15:22.280523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.280537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.887 #55 NEW cov: 12450 ft: 15836 corp: 29/935b lim: 45 exec/s: 55 rss: 75Mb L: 35/45 MS: 1 InsertRepeatedBytes- 00:07:51.887 [2024-10-09 00:15:22.320739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.320763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.887 [2024-10-09 00:15:22.320819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.320832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.887 [2024-10-09 00:15:22.320885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.320899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.887 [2024-10-09 00:15:22.320952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.887 [2024-10-09 00:15:22.320966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.321019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.321032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.888 #56 NEW cov: 12450 ft: 15873 corp: 30/980b lim: 45 exec/s: 56 rss: 75Mb L: 45/45 MS: 1 PersAutoDict- DE: "\004\000\000\000\000\000\000\000"- 00:07:51.888 [2024-10-09 00:15:22.380786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:eeee0aee cdw11:86ee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.380810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.380870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.380888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.380942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.380955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.381008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:eeeeeeee cdw11:eeee0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.381022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.888 #57 NEW cov: 12450 ft: 15877 corp: 31/1017b lim: 45 exec/s: 57 rss: 75Mb L: 37/45 MS: 1 ChangeByte- 00:07:51.888 [2024-10-09 00:15:22.420716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00030a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.420741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.420799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.420817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.420872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.420886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.888 #58 NEW cov: 12450 ft: 15932 corp: 32/1047b lim: 45 exec/s: 58 rss: 75Mb L: 30/45 MS: 1 CMP- DE: "\003\000\000\000\000\000\000\000"- 00:07:51.888 [2024-10-09 00:15:22.481060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.481085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.481140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.481153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.481207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.481220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.888 [2024-10-09 00:15:22.481272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.888 [2024-10-09 00:15:22.481285] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:52.147 #59 NEW cov: 12450 ft: 15940 corp: 33/1091b lim: 45 exec/s: 59 rss: 75Mb L: 44/45 MS: 1 InsertRepeatedBytes-
00:07:52.147 [2024-10-09 00:15:22.541204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.147 [2024-10-09 00:15:22.541230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:07:52.147 [2024-10-09 00:15:22.541286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.147 [2024-10-09 00:15:22.541302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:52.147 [2024-10-09 00:15:22.541372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.147 [2024-10-09 00:15:22.541386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:07:52.147 [2024-10-09 00:15:22.541442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:52.147 [2024-10-09 00:15:22.541455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:52.147 #60 NEW cov: 12450 ft: 15944 corp: 34/1129b lim: 45 exec/s: 30 rss: 75Mb L: 38/45 MS: 1 PersAutoDict- DE: "\004\000\000\000\000\000\000\000"-
00:07:52.147 #60 DONE cov: 12450 ft: 15944 corp: 34/1129b lim: 45 exec/s: 30 rss: 75Mb
00:07:52.147 ###### Recommended dictionary. ######
00:07:52.147 "\004\000\000\000\000\000\000\000" # Uses: 3
00:07:52.147 "\003\000\000\000\000\000\000\000" # Uses: 0
00:07:52.147 ###### End of recommended dictionary. ######
00:07:52.147 Done 60 runs in 2 second(s)
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406'
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:15:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6
[2024-10-09 00:15:22.765606] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
[2024-10-09 00:15:22.765671] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887483 ]
[2024-10-09 00:15:22.967295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-09 00:15:23.042415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-09 00:15:23.102197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-10-09 00:15:23.118422] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 ***
INFO: Running with entropic power schedule (0xFF, 100).
00:07:52.778 INFO: Seed: 2689082495 00:07:52.779 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:52.779 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:52.779 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:52.779 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.779 #2 INITED exec/s: 0 rss: 66Mb 00:07:52.779 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:52.779 This may also happen if the target rejected all inputs we tried so far 00:07:52.779 [2024-10-09 00:15:23.173887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:52.779 [2024-10-09 00:15:23.173918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.093 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:53.093 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.093 #5 NEW cov: 12140 ft: 12136 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 3 ChangeBit-CrossOver-CopyPart- 00:07:53.093 [2024-10-09 00:15:23.514949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.514986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.515037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.515051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.515101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bc0a cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.515114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.093 #6 NEW cov: 12253 ft: 13009 corp: 3/9b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:53.093 [2024-10-09 00:15:23.555078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.555104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.555155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002718 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.555168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.555218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000968d cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.555231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:53.093 [2024-10-09 00:15:23.555281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c1a5 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.555294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.093 #7 NEW cov: 12259 ft: 13452 corp: 4/18b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\001'\030\226\215\301\245j"- 00:07:53.093 [2024-10-09 00:15:23.595148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcd6 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.595173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.595222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.595236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.595286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.595300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.595350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.595363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.093 #8 NEW cov: 12344 ft: 13652 corp: 5/27b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:53.093 [2024-10-09 00:15:23.655348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000127 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.655372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.655423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001896 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.655437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.655487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008dc1 cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.655501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.093 [2024-10-09 00:15:23.655550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:53.093 [2024-10-09 00:15:23.655564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.093 #9 NEW cov: 12344 ft: 13685 corp: 6/35b lim: 10 exec/s: 0 rss: 74Mb L: 8/9 MS: 1 EraseBytes- 00:07:53.351 [2024-10-09 00:15:23.715461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcd6 cdw11:00000000 00:07:53.351 [2024-10-09 
00:15:23.715486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.715541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.715555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.715603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.715617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.715668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.715682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.351 #10 NEW cov: 12344 ft: 13727 corp: 7/44b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CopyPart- 00:07:53.351 [2024-10-09 00:15:23.775628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcd6 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.775656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.775705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000900 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.775719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.775771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.775785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.775854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.775868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.351 #11 NEW cov: 12344 ft: 13854 corp: 8/53b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:53.351 [2024-10-09 00:15:23.815895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000127 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.815920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.815969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001896 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.815983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.816033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008dc1 cdw11:00000000 00:07:53.351 [2024-10-09 
00:15:23.816046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.816096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008dc1 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.816110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.816158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.816172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.351 #12 NEW cov: 12344 ft: 13997 corp: 9/63b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CrossOver- 00:07:53.351 [2024-10-09 00:15:23.875934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bccc cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.875960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.876011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.876024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.876074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.876087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.876135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.876149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.351 #13 NEW cov: 12344 ft: 14077 corp: 10/72b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:07:53.351 [2024-10-09 00:15:23.915954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000008d cdw11:00000000 00:07:53.351 [2024-10-09 00:15:23.915981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.351 [2024-10-09 00:15:23.916033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c18d cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.916047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.352 [2024-10-09 00:15:23.916098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c1a5 cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.916112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.352 #15 NEW cov: 12344 ft: 14185 corp: 11/79b lim: 10 exec/s: 0 rss: 74Mb L: 7/10 MS: 2 ChangeByte-CrossOver- 00:07:53.352 [2024-10-09 00:15:23.956185] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000127 cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.956212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.352 [2024-10-09 00:15:23.956266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000800 cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.956282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.352 [2024-10-09 00:15:23.956335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008dc1 cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.956351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.352 [2024-10-09 00:15:23.956404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:53.352 [2024-10-09 00:15:23.956420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.352 #16 NEW cov: 12344 ft: 14214 corp: 12/87b lim: 10 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 ChangeBinInt- 00:07:53.610 [2024-10-09 00:15:23.996323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000acc cdw11:00000000 00:07:53.610 [2024-10-09 00:15:23.996348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.610 [2024-10-09 00:15:23.996400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.610 [2024-10-09 00:15:23.996413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.610 [2024-10-09 00:15:23.996461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.610 [2024-10-09 00:15:23.996474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.610 [2024-10-09 00:15:23.996523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.610 [2024-10-09 00:15:23.996537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.610 #17 NEW cov: 12344 ft: 14226 corp: 13/96b lim: 10 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 CrossOver- 00:07:53.610 [2024-10-09 00:15:24.056336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.056361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.056416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004dbc cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.056430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.056478] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bc0a cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.056492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.611 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:53.611 #18 NEW cov: 12367 ft: 14281 corp: 14/102b lim: 10 exec/s: 0 rss: 74Mb L: 6/10 MS: 1 ChangeBinInt- 00:07:53.611 [2024-10-09 00:15:24.096244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.096268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.611 #19 NEW cov: 12367 ft: 14339 corp: 15/105b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 1 EraseBytes- 00:07:53.611 [2024-10-09 00:15:24.156770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcd6 cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.156794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.156847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.156860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.156909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.156922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.156969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.156982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.611 #20 NEW cov: 12367 ft: 14358 corp: 16/114b lim: 10 exec/s: 20 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:53.611 [2024-10-09 00:15:24.216800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000127 cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.216828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.216880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001896 cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.216894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.611 [2024-10-09 00:15:24.216942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008d6a cdw11:00000000 00:07:53.611 [2024-10-09 00:15:24.216956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.611 #21 NEW cov: 12367 ft: 14388 corp: 17/120b lim: 10 exec/s: 21 rss: 74Mb L: 6/10 MS: 1 EraseBytes- 00:07:53.870 [2024-10-09 
00:15:24.257027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.257054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.257104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002718 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.257122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.257170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000968d cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.257184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.257232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000049a5 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.257246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.870 #22 NEW cov: 12367 ft: 14403 corp: 18/129b lim: 10 exec/s: 22 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:53.870 [2024-10-09 00:15:24.296928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.296953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.297005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.297019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.870 #23 NEW cov: 12367 ft: 14576 corp: 19/134b lim: 10 exec/s: 23 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:07:53.870 [2024-10-09 00:15:24.336931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000023bc cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.336958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.870 #24 NEW cov: 12367 ft: 14659 corp: 20/137b lim: 10 exec/s: 24 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:07:53.870 [2024-10-09 00:15:24.397181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.397207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.397257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6d6 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.397270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.870 #25 NEW cov: 12367 ft: 14684 corp: 21/142b lim: 10 exec/s: 25 rss: 74Mb L: 5/10 MS: 1 CrossOver- 00:07:53.870 [2024-10-09 00:15:24.457598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 
nsid:0 cdw10:00002701 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.457623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.457674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000180a cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.457688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.457738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000968d cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.457752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.870 [2024-10-09 00:15:24.457801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000049a5 cdw11:00000000 00:07:53.870 [2024-10-09 00:15:24.457821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.870 #26 NEW cov: 12367 ft: 14709 corp: 22/151b lim: 10 exec/s: 26 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:54.129 [2024-10-09 00:15:24.517757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.517783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.517835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.517849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.517898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bc4d cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.517911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.517959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.517973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.129 #27 NEW cov: 12367 ft: 14716 corp: 23/160b lim: 10 exec/s: 27 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:54.129 [2024-10-09 00:15:24.557875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.557899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.557950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.557963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.558012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.558026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.558075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004dbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.558089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.129 #28 NEW cov: 12367 ft: 14721 corp: 24/169b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:54.129 [2024-10-09 00:15:24.617821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008dc1 cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.617846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.617898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.617911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.129 #29 NEW cov: 12367 ft: 14738 corp: 25/173b lim: 10 exec/s: 29 rss: 75Mb L: 4/10 MS: 1 EraseBytes- 00:07:54.129 [2024-10-09 00:15:24.678228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000127 cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.678253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.678304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000800 cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.678317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.678366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002cc1 cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.678383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.678432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.678446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.129 #30 NEW cov: 12367 ft: 14741 corp: 26/181b lim: 10 exec/s: 30 rss: 75Mb L: 8/10 MS: 1 ChangeByte- 00:07:54.129 [2024-10-09 00:15:24.738373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.738398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.738447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.738461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:54.129 [2024-10-09 00:15:24.738510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c34d cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.738523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.129 [2024-10-09 00:15:24.738573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.129 [2024-10-09 00:15:24.738586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.129 #31 NEW cov: 12367 ft: 14750 corp: 27/190b lim: 10 exec/s: 31 rss: 75Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:54.388 [2024-10-09 00:15:24.778487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000027a5 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.778512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.778562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c108 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.778575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.778625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002c00 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.778638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.778688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000016a cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.778701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.388 #32 NEW cov: 12367 ft: 14789 corp: 28/198b lim: 10 exec/s: 32 rss: 75Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:54.388 [2024-10-09 00:15:24.838547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.838572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.838622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000055bc cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.838635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.838685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bc0a cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.838701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.388 #33 NEW cov: 12367 ft: 14800 corp: 29/204b lim: 10 exec/s: 33 rss: 75Mb L: 6/10 MS: 1 ChangeByte- 00:07:54.388 [2024-10-09 00:15:24.878435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bc41 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.878460] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.388 #34 NEW cov: 12367 ft: 14808 corp: 30/207b lim: 10 exec/s: 34 rss: 75Mb L: 3/10 MS: 1 ChangeBinInt- 00:07:54.388 [2024-10-09 00:15:24.918995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002701 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.919020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.919075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000180a cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.919088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.919138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000962f cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.919151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.919201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008d49 cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.919215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.388 [2024-10-09 00:15:24.919266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.919279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.388 #35 NEW cov: 12367 ft: 14812 corp: 31/217b lim: 10 exec/s: 35 rss: 75Mb L: 10/10 MS: 1 InsertByte- 00:07:54.388 [2024-10-09 00:15:24.978679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:07:54.388 [2024-10-09 00:15:24.978704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.388 #37 NEW cov: 12367 ft: 14847 corp: 32/219b lim: 10 exec/s: 37 rss: 75Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:07:54.647 [2024-10-09 00:15:25.039207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c6a cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.039232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.039285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.039299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.039349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a5c1 cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.039363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.039412] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002708 cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.039425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.647 #38 NEW cov: 12367 ft: 14851 corp: 33/227b lim: 10 exec/s: 38 rss: 75Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:54.647 [2024-10-09 00:15:25.099248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000a56a cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.099276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.099328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.099342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.099392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.099406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.647 #39 NEW cov: 12367 ft: 14869 corp: 34/234b lim: 10 exec/s: 39 rss: 75Mb L: 7/10 MS: 1 CrossOver- 00:07:54.647 [2024-10-09 00:15:25.159383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bc00 cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.159407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.159459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.159473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.647 [2024-10-09 00:15:25.159523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bcbc cdw11:00000000 00:07:54.647 [2024-10-09 00:15:25.159536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.647 #40 NEW cov: 12367 ft: 14881 corp: 35/241b lim: 10 exec/s: 20 rss: 75Mb L: 7/10 MS: 1 EraseBytes- 00:07:54.647 #40 DONE cov: 12367 ft: 14881 corp: 35/241b lim: 10 exec/s: 20 rss: 75Mb 00:07:54.647 ###### Recommended dictionary. ###### 00:07:54.647 "\001'\030\226\215\301\245j" # Uses: 0 00:07:54.647 ###### End of recommended dictionary. 
###### 00:07:54.647 Done 40 runs in 2 second(s) 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.906 00:15:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:54.906 [2024-10-09 00:15:25.389022] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:54.906 [2024-10-09 00:15:25.389089] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887851 ] 00:07:55.165 [2024-10-09 00:15:25.587084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.165 [2024-10-09 00:15:25.659963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.165 [2024-10-09 00:15:25.719380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.165 [2024-10-09 00:15:25.735621] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:55.165 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:55.165 INFO: Seed: 1011110982 00:07:55.165 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:55.165 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:55.165 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:55.165 INFO: A corpus is not provided, starting from an empty corpus 00:07:55.165 #2 INITED exec/s: 0 rss: 66Mb 00:07:55.165 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:55.165 This may also happen if the target rejected all inputs we tried so far 00:07:55.424 [2024-10-09 00:15:25.813014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.424 [2024-10-09 00:15:25.813054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.424 [2024-10-09 00:15:25.813152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:55.424 [2024-10-09 00:15:25.813168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.682 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:55.682 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:55.682 #4 NEW cov: 12122 ft: 12123 corp: 2/6b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 2 CrossOver-CMP- DE: "\000\000\000\001"- 00:07:55.682 [2024-10-09 00:15:26.164524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:55.682 [2024-10-09 00:15:26.164573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.682 [2024-10-09 00:15:26.164676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:55.682 [2024-10-09 00:15:26.164697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.682 #5 NEW cov: 12252 ft: 12866 corp: 3/11b lim: 10 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:55.682 [2024-10-09 00:15:26.214543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.682 [2024-10-09 00:15:26.214570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.682 [2024-10-09 00:15:26.214656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:55.682 [2024-10-09 00:15:26.214674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.682 #6 NEW cov: 12258 ft: 13054 corp: 4/16b lim: 10 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:55.683 [2024-10-09 00:15:26.285348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:55.683 [2024-10-09 
00:15:26.285375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.683 [2024-10-09 00:15:26.285455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:55.683 [2024-10-09 00:15:26.285470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.683 [2024-10-09 00:15:26.285559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.683 [2024-10-09 00:15:26.285575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.683 [2024-10-09 00:15:26.285656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:07:55.683 [2024-10-09 00:15:26.285671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.683 #7 NEW cov: 12343 ft: 13470 corp: 5/25b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:55.941 [2024-10-09 00:15:26.335176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.335203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.941 [2024-10-09 00:15:26.335290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.335307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.941 #8 NEW cov: 12343 ft: 13660 corp: 6/30b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 CopyPart- 00:07:55.941 [2024-10-09 00:15:26.405476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e100 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.405502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.941 [2024-10-09 00:15:26.405597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.405612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.941 #9 NEW cov: 12343 ft: 13725 corp: 7/35b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ChangeByte- 00:07:55.941 [2024-10-09 00:15:26.475707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f00 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.475732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.941 [2024-10-09 00:15:26.475817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.475847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.941 #10 NEW cov: 12343 ft: 13827 corp: 8/40b lim: 10 exec/s: 0 rss: 
74Mb L: 5/9 MS: 1 ChangeByte- 00:07:55.941 [2024-10-09 00:15:26.525996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.526022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.941 [2024-10-09 00:15:26.526105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002383 cdw11:00000000 00:07:55.941 [2024-10-09 00:15:26.526124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.941 #11 NEW cov: 12343 ft: 13900 corp: 9/45b lim: 10 exec/s: 0 rss: 74Mb L: 5/9 MS: 1 ChangeByte- 00:07:56.199 [2024-10-09 00:15:26.576961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.576987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.577071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.577087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.577180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.577196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.577284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.577300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.577363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.577378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.199 #12 NEW cov: 12343 ft: 13988 corp: 10/55b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:56.199 [2024-10-09 00:15:26.627189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.627215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.627302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.627316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.627401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.627418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.627502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000080 cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.627519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.199 [2024-10-09 00:15:26.627604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.199 [2024-10-09 00:15:26.627621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.199 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:56.199 #13 NEW cov: 12366 ft: 14021 corp: 11/65b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:07:56.199 [2024-10-09 00:15:26.696653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.696678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.200 [2024-10-09 00:15:26.696767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008387 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.696781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.200 #14 NEW cov: 12366 ft: 14069 corp: 12/70b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ChangeBit- 00:07:56.200 [2024-10-09 00:15:26.746798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009600 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.746829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.200 [2024-10-09 00:15:26.746929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.746947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.200 #15 NEW cov: 12366 ft: 14094 corp: 13/75b lim: 10 exec/s: 15 rss: 74Mb L: 5/10 MS: 1 CMP- DE: "\226\000\000\000"- 00:07:56.200 [2024-10-09 00:15:26.817868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.817895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.200 [2024-10-09 00:15:26.817993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.818008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.200 [2024-10-09 00:15:26.818101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006a6a cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.818117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.200 [2024-10-09 00:15:26.818215] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006a0a cdw11:00000000 00:07:56.200 [2024-10-09 00:15:26.818233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.459 #16 NEW cov: 12366 ft: 14132 corp: 14/83b lim: 10 exec/s: 16 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:56.459 [2024-10-09 00:15:26.867595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.867620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.867713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000083fc cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.867728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.867821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000870a cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.867839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.459 #17 NEW cov: 12366 ft: 14319 corp: 15/89b lim: 10 exec/s: 17 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:07:56.459 [2024-10-09 00:15:26.918159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.918185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.918278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.918293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.918380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.918397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.918488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.918504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.459 #18 NEW cov: 12366 ft: 14384 corp: 16/98b lim: 10 exec/s: 18 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:07:56.459 [2024-10-09 00:15:26.988621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.988650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.988738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.988754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.988848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.988866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.988961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000d580 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.988977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:26.989058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:07:56.459 [2024-10-09 00:15:26.989074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.459 #19 NEW cov: 12366 ft: 14431 corp: 17/108b lim: 10 exec/s: 19 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:07:56.459 [2024-10-09 00:15:27.058457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.459 [2024-10-09 00:15:27.058483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:27.058565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000083fc cdw11:00000000 00:07:56.459 [2024-10-09 00:15:27.058582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.459 [2024-10-09 00:15:27.058665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005b0a cdw11:00000000 00:07:56.459 [2024-10-09 00:15:27.058681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.718 #20 NEW cov: 12366 ft: 14438 corp: 18/114b lim: 10 exec/s: 20 rss: 74Mb L: 6/10 MS: 1 ChangeByte- 00:07:56.718 [2024-10-09 00:15:27.128910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.128940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.129037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000083fc cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.129056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.129146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005b0a cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.129171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.129265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.129286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.718 #21 NEW cov: 12366 ft: 14442 corp: 19/123b lim: 10 exec/s: 21 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:56.718 [2024-10-09 00:15:27.199247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004183 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.199274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.199366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.199383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.199472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.199488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.199573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.199590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.718 #22 NEW cov: 12366 ft: 14474 corp: 20/132b lim: 10 exec/s: 22 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:07:56.718 [2024-10-09 00:15:27.269433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.269460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.269558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.269575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.269661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.269677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.269759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.269777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.718 #23 NEW cov: 12366 ft: 14475 corp: 21/141b lim: 10 exec/s: 23 rss: 75Mb L: 9/10 MS: 1 ChangeBit- 00:07:56.718 [2024-10-09 00:15:27.319149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.319175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.718 [2024-10-09 00:15:27.319262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000001 
cdw11:00000000 00:07:56.718 [2024-10-09 00:15:27.319290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.718 #24 NEW cov: 12366 ft: 14489 corp: 22/146b lim: 10 exec/s: 24 rss: 75Mb L: 5/10 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:56.977 [2024-10-09 00:15:27.369656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dd83 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.369686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.369780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000083fc cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.369796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.369888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000870a cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.369906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.977 #25 NEW cov: 12366 ft: 14503 corp: 23/152b lim: 10 exec/s: 25 rss: 75Mb L: 6/10 MS: 1 ChangeByte- 00:07:56.977 [2024-10-09 00:15:27.419517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.419543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.419625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.419641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.470255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.470281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.470374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.470389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.470481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.470499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.470585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.470600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.977 #27 NEW cov: 12366 ft: 14553 corp: 24/161b lim: 10 exec/s: 27 rss: 75Mb L: 9/10 MS: 2 ShuffleBytes-CrossOver- 
00:07:56.977 [2024-10-09 00:15:27.520492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.520518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.520601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.520617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.520704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.520720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.520807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000101 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.520826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.977 #28 NEW cov: 12366 ft: 14566 corp: 25/170b lim: 10 exec/s: 28 rss: 75Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000\000\000\001"- 00:07:56.977 [2024-10-09 00:15:27.570702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.570728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.570829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.570845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.570938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008383 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.570956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.977 [2024-10-09 00:15:27.571052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000101 cdw11:00000000 00:07:56.977 [2024-10-09 00:15:27.571069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.236 #29 NEW cov: 12366 ft: 14569 corp: 26/179b lim: 10 exec/s: 29 rss: 75Mb L: 9/10 MS: 1 CopyPart- 00:07:57.236 [2024-10-09 00:15:27.641436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.641462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.236 [2024-10-09 00:15:27.641552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.641569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:57.236 [2024-10-09 00:15:27.641657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000183 cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.641672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.236 [2024-10-09 00:15:27.641760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000080 cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.641778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.236 [2024-10-09 00:15:27.641876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.641892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.236 #30 NEW cov: 12366 ft: 14587 corp: 27/189b lim: 10 exec/s: 30 rss: 75Mb L: 10/10 MS: 1 CrossOver- 00:07:57.236 [2024-10-09 00:15:27.691468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:07:57.236 [2024-10-09 00:15:27.691495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.691586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.691603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.691687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003a01 cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.691704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.691796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.691815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.691897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000183 cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.691913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.237 #31 NEW cov: 12366 ft: 14644 corp: 28/199b lim: 10 exec/s: 31 rss: 75Mb L: 10/10 MS: 1 InsertByte- 00:07:57.237 [2024-10-09 00:15:27.761541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.761566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.761651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000006a cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.761667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.761740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000016a cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.761756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.237 [2024-10-09 00:15:27.761848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006a6a cdw11:00000000 00:07:57.237 [2024-10-09 00:15:27.761864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.237 #32 pulse cov: 12366 ft: 14661 corp: 28/199b lim: 10 exec/s: 16 rss: 75Mb 00:07:57.237 #32 NEW cov: 12366 ft: 14661 corp: 29/208b lim: 10 exec/s: 16 rss: 75Mb L: 9/10 MS: 1 CopyPart- 00:07:57.237 #32 DONE cov: 12366 ft: 14661 corp: 29/208b lim: 10 exec/s: 16 rss: 75Mb 00:07:57.237 ###### Recommended dictionary. ###### 00:07:57.237 "\000\000\000\001" # Uses: 4 00:07:57.237 "\226\000\000\000" # Uses: 0 00:07:57.237 ###### End of recommended dictionary. ###### 00:07:57.237 Done 32 runs in 2 second(s) 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.496 00:15:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ 
-F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:57.496 [2024-10-09 00:15:27.979681] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:07:57.496 [2024-10-09 00:15:27.979768] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888204 ] 00:07:57.755 [2024-10-09 00:15:28.169633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.755 [2024-10-09 00:15:28.242487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.755 [2024-10-09 00:15:28.301732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.755 [2024-10-09 00:15:28.317973] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:57.755 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.755 INFO: Seed: 3594115963 00:07:57.755 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:07:57.755 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:07:57.755 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:57.755 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.013 [2024-10-09 00:15:28.395621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.395669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.013 #2 INITED cov: 12143 ft: 12139 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:58.013 [2024-10-09 00:15:28.445618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.445650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.013 #3 NEW cov: 12280 ft: 12598 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeBit- 00:07:58.013 [2024-10-09 00:15:28.516052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.516080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.013 #4 NEW cov: 12286 ft: 12786 corp: 3/3b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBinInt- 00:07:58.013 [2024-10-09 00:15:28.586499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.586529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.013 #5 NEW cov: 12371 ft: 13006 corp: 4/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:07:58.013 [2024-10-09 00:15:28.638083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.638109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.013 [2024-10-09 00:15:28.638214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.638237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.013 [2024-10-09 00:15:28.638321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.638338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.013 [2024-10-09 00:15:28.638428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.638444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.013 [2024-10-09 00:15:28.638537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.013 [2024-10-09 00:15:28.638555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.271 #6 NEW cov: 12371 ft: 13925 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:58.271 [2024-10-09 00:15:28.698006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.698032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.271 [2024-10-09 00:15:28.698121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.698138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.271 [2024-10-09 00:15:28.698230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.698247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.271 [2024-10-09 00:15:28.698339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.698355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.271 #7 NEW cov: 12371 ft: 14153 corp: 6/13b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:07:58.271 [2024-10-09 
00:15:28.768942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.768968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.271 [2024-10-09 00:15:28.769055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.769081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.271 [2024-10-09 00:15:28.769172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.271 [2024-10-09 00:15:28.769189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.272 [2024-10-09 00:15:28.769280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.272 [2024-10-09 00:15:28.769297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.272 [2024-10-09 00:15:28.769397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.272 [2024-10-09 00:15:28.769412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.272 #8 NEW cov: 12371 ft: 14197 corp: 7/18b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:58.272 [2024-10-09 00:15:28.817929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.272 [2024-10-09 00:15:28.817955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.272 #9 NEW cov: 12371 ft: 14302 corp: 8/19b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:58.272 [2024-10-09 00:15:28.868152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.272 [2024-10-09 00:15:28.868177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.529 #10 NEW cov: 12371 ft: 14315 corp: 9/20b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:07:58.529 [2024-10-09 00:15:28.938424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.529 [2024-10-09 00:15:28.938452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.529 #11 NEW cov: 12371 ft: 14347 corp: 10/21b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:58.529 [2024-10-09 00:15:29.008686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.529 [2024-10-09 00:15:29.008714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.529 #12 NEW cov: 12371 ft: 14398 corp: 11/22b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:07:58.529 [2024-10-09 00:15:29.080072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.529 [2024-10-09 00:15:29.080098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.529 [2024-10-09 00:15:29.080188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.529 [2024-10-09 00:15:29.080203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.529 [2024-10-09 00:15:29.080296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.529 [2024-10-09 00:15:29.080311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.529 [2024-10-09 00:15:29.080402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.080418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.530 #13 NEW cov: 12371 ft: 14471 corp: 12/26b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:07:58.530 [2024-10-09 00:15:29.150893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.150921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.530 [2024-10-09 00:15:29.151016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.151035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.530 [2024-10-09 00:15:29.151126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.151142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.530 [2024-10-09 00:15:29.151234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.151251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.530 [2024-10-09 00:15:29.151345] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.530 [2024-10-09 00:15:29.151362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.788 #14 NEW cov: 12371 ft: 14502 corp: 13/31b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:07:58.788 [2024-10-09 00:15:29.219798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.788 [2024-10-09 00:15:29.219835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.788 #15 NEW cov: 12371 ft: 14609 corp: 14/32b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:58.788 [2024-10-09 00:15:29.269899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.788 [2024-10-09 00:15:29.269927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.046 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:59.046 #16 NEW cov: 12394 ft: 14635 corp: 15/33b lim: 5 exec/s: 16 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:59.046 [2024-10-09 00:15:29.590860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.046 [2024-10-09 00:15:29.590904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.046 #17 NEW cov: 12394 ft: 14720 corp: 16/34b lim: 5 exec/s: 17 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:59.046 [2024-10-09 00:15:29.661059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.046 [2024-10-09 00:15:29.661090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.303 #18 NEW cov: 12394 ft: 14741 corp: 17/35b lim: 5 exec/s: 18 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:07:59.303 [2024-10-09 00:15:29.711409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.711440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.303 #19 NEW cov: 12394 ft: 14757 corp: 18/36b lim: 5 exec/s: 19 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:59.303 [2024-10-09 00:15:29.783129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.783161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.783267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.783282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.783376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.783392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.783482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.783499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.783595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.783611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.303 #20 NEW cov: 12394 ft: 14771 corp: 19/41b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:59.303 [2024-10-09 00:15:29.833714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.833742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.833840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.833856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.833951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.833970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.834064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.834082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.303 [2024-10-09 00:15:29.834177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.834194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.303 #21 NEW cov: 12394 ft: 14825 corp: 20/46b lim: 5 exec/s: 21 rss: 75Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:59.303 [2024-10-09 00:15:29.912415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.303 [2024-10-09 00:15:29.912444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.303 #22 NEW cov: 12394 ft: 14874 corp: 21/47b lim: 5 exec/s: 22 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:59.561 [2024-10-09 00:15:29.962668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:29.962703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.561 #23 NEW cov: 12394 ft: 14895 corp: 22/48b lim: 5 exec/s: 23 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:59.561 [2024-10-09 00:15:30.032888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:30.032919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.561 #24 NEW cov: 12394 ft: 14982 corp: 23/49b lim: 5 exec/s: 24 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:59.561 [2024-10-09 00:15:30.103527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:30.103554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.561 [2024-10-09 00:15:30.103658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:30.103675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.561 #25 NEW cov: 12394 ft: 15195 corp: 24/51b lim: 5 exec/s: 25 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:59.561 [2024-10-09 00:15:30.153683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:30.153710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.561 [2024-10-09 00:15:30.153803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.561 [2024-10-09 00:15:30.153824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.819 #26 NEW cov: 12394 ft: 15208 corp: 25/53b lim: 5 exec/s: 26 rss: 75Mb L: 2/5 MS: 1 CrossOver- 00:07:59.819 [2024-10-09 00:15:30.224968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.224994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.225106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.225123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.225215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.225233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.225322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.225339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.225430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.225453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.819 #27 NEW cov: 12394 ft: 15228 corp: 26/58b lim: 5 exec/s: 27 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:07:59.819 [2024-10-09 00:15:30.294170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.294196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.294298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.294314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.819 #28 NEW cov: 12394 ft: 15240 corp: 27/60b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:59.819 [2024-10-09 00:15:30.365515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.365544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.365639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.365658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 00:15:30.365750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.819 [2024-10-09 00:15:30.365769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.819 [2024-10-09 
00:15:30.365867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.819 [2024-10-09 00:15:30.365886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:07:59.819 [2024-10-09 00:15:30.365974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:07:59.819 [2024-10-09 00:15:30.365993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:07:59.819 #29 NEW cov: 12394 ft: 15267 corp: 28/65b lim: 5 exec/s: 14 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes-
00:07:59.819 #29 DONE cov: 12394 ft: 15267 corp: 28/65b lim: 5 exec/s: 14 rss: 75Mb
00:07:59.819 Done 29 runs in 2 second(s)
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:00.078 00:15:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
00:08:00.336 [2024-10-09 00:15:30.587914] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:00.336 [2024-10-09 00:15:30.587982] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888570 ]
00:08:00.336 [2024-10-09 00:15:30.772772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.336 [2024-10-09 00:15:30.845577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.336 [2024-10-09 00:15:30.904873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:00.336 [2024-10-09 00:15:30.921108] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
00:08:00.336 INFO: Running with entropic power schedule (0xFF, 100).
00:08:00.336 INFO: Seed: 1900140648
00:08:00.336 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6),
00:08:00.336 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48),
00:08:00.336 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:08:00.336 INFO: A corpus is not provided, starting from an empty corpus
00:08:00.336 [2024-10-09 00:15:30.970401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:00.336 [2024-10-09 00:15:30.970430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:00.595 #2 INITED cov: 12168 ft: 12153 corp: 1/1b exec/s: 0 rss: 72Mb
00:08:00.595 [2024-10-09 00:15:31.010375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:00.595 [2024-10-09 00:15:31.010402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:00.595 #3 NEW cov: 12281 ft: 12604 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ShuffleBytes-
00:08:00.595 [2024-10-09 00:15:31.070755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:00.595 [2024-10-09 00:15:31.070781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:00.595 [2024-10-09 00:15:31.070841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:00.595 [2024-10-09 00:15:31.070856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:00.595 #4 NEW cov: 12287 ft: 13687 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte-
00:08:00.595 [2024-10-09 00:15:31.131351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:00.595 [2024-10-09 00:15:31.131377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:00.595 [2024-10-09 00:15:31.131430]
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.131444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.595 [2024-10-09 00:15:31.131495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.131509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.595 [2024-10-09 00:15:31.131561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.131574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.595 [2024-10-09 00:15:31.131640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.131654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.595 #5 NEW cov: 12372 ft: 14317 corp: 4/9b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:00.595 [2024-10-09 00:15:31.171125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.171151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.595 [2024-10-09 00:15:31.171205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.171220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.595 [2024-10-09 00:15:31.171271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.595 [2024-10-09 00:15:31.171285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.595 #6 NEW cov: 12372 ft: 14598 corp: 5/12b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:08:00.853 [2024-10-09 00:15:31.231601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.853 [2024-10-09 00:15:31.231626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.853 [2024-10-09 00:15:31.231678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.853 [2024-10-09 00:15:31.231693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:00.853 [2024-10-09 00:15:31.231741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.853 [2024-10-09 00:15:31.231755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.231804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.231826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.231875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.231889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.854 #7 NEW cov: 12372 ft: 14692 corp: 6/17b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBinInt- 00:08:00.854 [2024-10-09 00:15:31.291129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.291154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.854 #8 NEW cov: 12372 ft: 14762 corp: 7/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:00.854 [2024-10-09 00:15:31.331824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.331849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.331902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.331916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.331968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.331982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.332033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.332047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.332097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.332110] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:00.854 #9 NEW cov: 12372 ft: 14791 corp: 8/23b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:08:00.854 [2024-10-09 00:15:31.371368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.371392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.854 #10 NEW cov: 12372 ft: 14828 corp: 9/24b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:08:00.854 [2024-10-09 00:15:31.411926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.411951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.412004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.412021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.412074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.412087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.412136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.412166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.854 #11 NEW cov: 12372 ft: 14888 corp: 10/28b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:08:00.854 [2024-10-09 00:15:31.452332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.452355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.452422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.452436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.452488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.452501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.854 [2024-10-09 00:15:31.452552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.854 [2024-10-09 00:15:31.452565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.112 #12 NEW cov: 12372 ft: 14900 corp: 11/32b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertByte- 00:08:01.112 [2024-10-09 00:15:31.512071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.512096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.512147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.512161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.512213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.512227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.112 #13 NEW cov: 12372 ft: 14974 corp: 12/35b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:08:01.112 [2024-10-09 00:15:31.552220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.552243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.552313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.552329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.552379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.552392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.112 #14 NEW cov: 12372 ft: 14996 corp: 13/38b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:08:01.112 [2024-10-09 00:15:31.592003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.592027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.112 #15 NEW cov: 12372 ft: 15033 corp: 14/39b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:08:01.112 [2024-10-09 00:15:31.652497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.652522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.652574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.652588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.652640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.652653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.112 #16 NEW cov: 12372 ft: 15082 corp: 15/42b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:08:01.112 [2024-10-09 00:15:31.712666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.712692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.112 [2024-10-09 00:15:31.712744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.112 [2024-10-09 00:15:31.712758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.113 [2024-10-09 00:15:31.712810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.113 [2024-10-09 00:15:31.712828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.369 #17 NEW cov: 12372 ft: 15095 corp: 16/45b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:08:01.369 [2024-10-09 00:15:31.772534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.369 [2024-10-09 00:15:31.772559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.369 #18 NEW cov: 12372 ft: 15122 corp: 17/46b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:08:01.369 [2024-10-09 00:15:31.812928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.369 [2024-10-09 00:15:31.812953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.369 [2024-10-09 00:15:31.813008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.369 [2024-10-09 00:15:31.813022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.369 [2024-10-09 
00:15:31.813075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.369 [2024-10-09 00:15:31.813089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.627 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:01.627 #19 NEW cov: 12395 ft: 15141 corp: 18/49b lim: 5 exec/s: 19 rss: 74Mb L: 3/5 MS: 1 ChangeBinInt- 00:08:01.627 [2024-10-09 00:15:32.134016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.134057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.134116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.134133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.134191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.134207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.134265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.134285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.627 #20 NEW cov: 12395 ft: 15204 corp: 19/53b lim: 5 exec/s: 20 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:08:01.627 [2024-10-09 00:15:32.194221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.194248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.194303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.194317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.194369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.194384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.194434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:01.627 [2024-10-09 00:15:32.194448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.627 [2024-10-09 00:15:32.194499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.194517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:01.627 #21 NEW cov: 12395 ft: 15267 corp: 20/58b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:08:01.627 [2024-10-09 00:15:32.253756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.627 [2024-10-09 00:15:32.253781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.886 #22 NEW cov: 12395 ft: 15299 corp: 21/59b lim: 5 exec/s: 22 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:08:01.886 [2024-10-09 00:15:32.314568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.314593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.314646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.314660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.314713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.314727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.314778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.314792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.314848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.314862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:01.886 #23 NEW cov: 12395 ft: 15314 corp: 22/64b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:08:01.886 [2024-10-09 00:15:32.374543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.374569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 
00:15:32.374624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.374638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.374691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.374705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.886 [2024-10-09 00:15:32.374760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.374773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.886 #24 NEW cov: 12395 ft: 15325 corp: 23/68b lim: 5 exec/s: 24 rss: 75Mb L: 4/5 MS: 1 EraseBytes- 00:08:01.886 [2024-10-09 00:15:32.414678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.886 [2024-10-09 00:15:32.414704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.414774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.414788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.414852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.414867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.414918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.414939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.887 #25 NEW cov: 12395 ft: 15339 corp: 24/72b lim: 5 exec/s: 25 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:08:01.887 [2024-10-09 00:15:32.454790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.454821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.454847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.454862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.454915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.454929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.454979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.454992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.887 #26 NEW cov: 12395 ft: 15383 corp: 25/76b lim: 5 exec/s: 26 rss: 75Mb L: 4/5 MS: 1 ChangeByte- 00:08:01.887 [2024-10-09 00:15:32.514681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.514708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.887 [2024-10-09 00:15:32.514761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.887 [2024-10-09 00:15:32.514775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 #27 NEW cov: 12395 ft: 15402 corp: 26/78b lim: 5 exec/s: 27 rss: 75Mb L: 2/5 MS: 1 ChangeByte- 00:08:02.145 [2024-10-09 00:15:32.555220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.555245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.555304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.555317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.555368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.555382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.555434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.555449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.555499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.555512] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.145 #28 NEW cov: 12395 ft: 15413 corp: 27/83b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:08:02.145 [2024-10-09 00:15:32.595209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.595234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.595288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.595302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.595353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.595367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.145 [2024-10-09 00:15:32.595418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.145 [2024-10-09 00:15:32.595430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.145 #29 NEW cov: 12395 ft: 15437 corp: 28/87b lim: 5 exec/s: 29 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:08:02.145 [2024-10-09 00:15:32.655515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.655540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.655594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.655608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.655662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.655676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.655729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.655743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.655794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:02.146 [2024-10-09 00:15:32.655808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.146 #30 NEW cov: 12395 ft: 15444 corp: 29/92b lim: 5 exec/s: 30 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:08:02.146 [2024-10-09 00:15:32.715514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.715540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.715612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.715627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.715681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.715695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.715750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.715763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.146 #31 NEW cov: 12395 ft: 15481 corp: 30/96b lim: 5 exec/s: 31 rss: 75Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:02.146 [2024-10-09 00:15:32.755763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.755788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.755844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.755858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.755913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.755927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.755981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.755994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.146 [2024-10-09 00:15:32.756047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 
nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.146 [2024-10-09 00:15:32.756061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.404 #32 NEW cov: 12395 ft: 15482 corp: 31/101b lim: 5 exec/s: 32 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:08:02.404 [2024-10-09 00:15:32.815363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.404 [2024-10-09 00:15:32.815388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.405 #33 NEW cov: 12395 ft: 15499 corp: 32/102b lim: 5 exec/s: 33 rss: 75Mb L: 1/5 MS: 1 CopyPart- 00:08:02.405 [2024-10-09 00:15:32.855606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.855631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.855687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.855701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.405 #34 NEW cov: 12395 ft: 15557 corp: 33/104b lim: 5 exec/s: 34 rss: 75Mb L: 2/5 MS: 1 ChangeBit- 00:08:02.405 [2024-10-09 00:15:32.896149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.896173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.896230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.896243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.896297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.896311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.896363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.896376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.896427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.896440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.405 #35 NEW cov: 12395 ft: 15569 corp: 34/109b lim: 5 exec/s: 35 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:08:02.405 [2024-10-09 00:15:32.936276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.936302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.936357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.936371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.936423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.936441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.936494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.936507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.405 [2024-10-09 00:15:32.936559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.405 [2024-10-09 00:15:32.936574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.405 #36 NEW cov: 12395 ft: 15581 corp: 35/114b lim: 5 exec/s: 18 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:08:02.405 #36 DONE cov: 12395 ft: 15581 corp: 35/114b lim: 5 exec/s: 18 rss: 75Mb 00:08:02.405 Done 36 runs in 2 second(s) 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf 
%02d 10 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:02.664 00:15:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:02.664 [2024-10-09 00:15:33.139893] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:02.664 [2024-10-09 00:15:33.139960] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888925 ] 00:08:02.922 [2024-10-09 00:15:33.327065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.922 [2024-10-09 00:15:33.400138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.922 [2024-10-09 00:15:33.459249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.922 [2024-10-09 00:15:33.475496] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:02.922 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.922 INFO: Seed: 160198096 00:08:02.922 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:02.922 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:02.922 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:02.922 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.922 #2 INITED exec/s: 0 rss: 66Mb 00:08:02.922 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:02.922 This may also happen if the target rejected all inputs we tried so far 00:08:02.922 [2024-10-09 00:15:33.520983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.922 [2024-10-09 00:15:33.521012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.922 [2024-10-09 00:15:33.521069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.922 [2024-10-09 00:15:33.521083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.439 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:03.439 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:03.439 #12 NEW cov: 12191 ft: 12190 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 5 CopyPart-CopyPart-CrossOver-CMP-InsertRepeatedBytes- DE: "\000\000\000\""- 00:08:03.439 [2024-10-09 00:15:33.862090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:33.862125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.439 [2024-10-09 00:15:33.862186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:33.862200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.439 [2024-10-09 00:15:33.862259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a00 cdw11:00002200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:33.862273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.439 #13 NEW cov: 12304 ft: 12964 corp: 3/51b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 PersAutoDict- DE: "\000\000\000\""- 00:08:03.439 [2024-10-09 00:15:33.921890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:33.921918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.439 #18 NEW cov: 12310 ft: 13434 corp: 4/62b lim: 40 exec/s: 0 rss: 74Mb L: 11/27 MS: 5 InsertByte-EraseBytes-CopyPart-CrossOver-InsertRepeatedBytes- 00:08:03.439 [2024-10-09 00:15:33.961953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06009414 cdw11:26000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:33.961979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.439 #22 NEW cov: 12395 ft: 
13724 corp: 5/72b lim: 40 exec/s: 0 rss: 74Mb L: 10/27 MS: 4 EraseBytes-ChangeBit-ChangeBinInt-PersAutoDict- DE: "\000\000\000\""- 00:08:03.439 [2024-10-09 00:15:34.022222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:34.022247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.439 [2024-10-09 00:15:34.022308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:262626db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:34.022323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.439 #23 NEW cov: 12395 ft: 13916 corp: 6/95b lim: 40 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ChangeBinInt- 00:08:03.439 [2024-10-09 00:15:34.062497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:34.062523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.439 [2024-10-09 00:15:34.062584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:262626da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:34.062598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.439 [2024-10-09 00:15:34.062661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a00 cdw11:00002200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.439 [2024-10-09 00:15:34.062675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.698 #24 NEW cov: 12395 ft: 14029 corp: 7/122b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 ChangeBinInt- 00:08:03.698 [2024-10-09 00:15:34.122738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.698 [2024-10-09 00:15:34.122765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.122847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26261414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.122873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.122934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:141426da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.122948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.123023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:14260a0a cdw11:0a00009c SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.123037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.699 #25 NEW cov: 12395 ft: 14547 corp: 8/156b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:08:03.699 [2024-10-09 00:15:34.162536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141426 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.162561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.699 #31 NEW cov: 12395 ft: 14638 corp: 9/165b lim: 40 exec/s: 0 rss: 74Mb L: 9/34 MS: 1 EraseBytes- 00:08:03.699 [2024-10-09 00:15:34.203016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.203046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.203107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:262626da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.203121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.203181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a00 cdw11:00002200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.203194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.203252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0127189c cdw11:3cee2250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.203266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.699 #32 NEW cov: 12395 ft: 14649 corp: 10/200b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CMP- DE: "\001'\030\234<\356\"P"- 00:08:03.699 [2024-10-09 00:15:34.263054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.263079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.263143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:2626ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.263157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.263217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff26db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.263231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:03.699 #33 NEW cov: 12395 ft: 14736 corp: 11/231b lim: 40 exec/s: 0 rss: 74Mb L: 31/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:03.699 [2024-10-09 00:15:34.323328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.323353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.323413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26261414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.323427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.323484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:141426da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.323497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.699 [2024-10-09 00:15:34.323555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ecd9f5f8 cdw11:0a00009c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.699 [2024-10-09 00:15:34.323568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.958 #34 NEW cov: 12395 ft: 14795 corp: 12/265b lim: 40 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:08:03.958 [2024-10-09 00:15:34.383472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.383499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.383576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:262626da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.383590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.383650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a00 cdw11:00002200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.383663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.383721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:0127189c cdw11:26ee2250 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.383734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.958 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:03.958 #35 NEW cov: 12418 ft: 14839 corp: 13/300b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:08:03.958 [2024-10-09 00:15:34.443652] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:19141414 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.443677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.443737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26261414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.443751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.443811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:141426da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.443828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.443893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:14260a0a cdw11:0a00009c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.443907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.958 #36 NEW cov: 12418 ft: 14877 corp: 14/334b lim: 40 exec/s: 0 rss: 74Mb L: 34/35 MS: 1 ChangeBinInt- 00:08:03.958 [2024-10-09 00:15:34.483377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06009414 cdw11:26000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.483401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.958 #37 NEW cov: 12418 ft: 14907 corp: 15/345b lim: 40 exec/s: 37 rss: 74Mb L: 11/35 MS: 1 InsertByte- 00:08:03.958 [2024-10-09 00:15:34.543944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14142626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.543968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.544026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.544040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.544102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:26da2626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.544116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.544175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:26261426 cdw11:0a0a0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.544189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:08:03.958 #38 NEW cov: 12418 ft: 14939 corp: 16/381b lim: 40 exec/s: 38 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:08:03.958 [2024-10-09 00:15:34.584060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.584084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.584144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.584158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.584219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff2626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.584232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.958 [2024-10-09 00:15:34.584292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:26260a0a cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.958 [2024-10-09 00:15:34.584305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.216 #39 NEW cov: 12418 ft: 14950 corp: 17/414b lim: 40 exec/s: 39 rss: 74Mb L: 33/36 MS: 1 InsertRepeatedBytes- 00:08:04.216 [2024-10-09 00:15:34.623968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.623993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.624054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.624068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 #40 NEW cov: 12418 ft: 14971 corp: 18/431b lim: 40 exec/s: 40 rss: 74Mb L: 17/36 MS: 1 InsertRepeatedBytes- 00:08:04.216 [2024-10-09 00:15:34.664155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.664179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.664240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:262626da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.664253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.664312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a0a0a00 cdw11:27189c26 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.664330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.216 #41 NEW cov: 12418 ft: 15040 corp: 19/461b lim: 40 exec/s: 41 rss: 74Mb L: 30/36 MS: 1 EraseBytes- 00:08:04.216 [2024-10-09 00:15:34.724046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.724071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 #42 NEW cov: 12418 ft: 15086 corp: 20/476b lim: 40 exec/s: 42 rss: 74Mb L: 15/36 MS: 1 CrossOver- 00:08:04.216 [2024-10-09 00:15:34.764292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14142626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.764317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.764375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:26262614 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.764388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 #44 NEW cov: 12418 ft: 15103 corp: 21/494b lim: 40 exec/s: 44 rss: 74Mb L: 18/36 MS: 2 EraseBytes-CrossOver- 00:08:04.216 [2024-10-09 00:15:34.824735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.824760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.824825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.824840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.824903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:2626ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.824916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.216 [2024-10-09 00:15:34.824974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffff26db cdw11:f50a0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.216 [2024-10-09 00:15:34.824987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.474 #45 NEW cov: 12418 ft: 15145 corp: 22/529b lim: 40 exec/s: 45 rss: 75Mb L: 35/36 MS: 1 InsertRepeatedBytes- 00:08:04.474 [2024-10-09 00:15:34.884763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26261414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.884788] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:34.884853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:141426da cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.884879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:34.884937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:14260a0a cdw11:0a00009c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.884950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.474 #46 NEW cov: 12418 ft: 15160 corp: 23/555b lim: 40 exec/s: 46 rss: 75Mb L: 26/36 MS: 1 EraseBytes- 00:08:04.474 [2024-10-09 00:15:34.924749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.924774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:34.924837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:262626db cdw11:f50a0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.924867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 #47 NEW cov: 12418 ft: 15172 corp: 24/574b lim: 40 exec/s: 47 rss: 75Mb L: 19/36 MS: 1 EraseBytes- 00:08:04.474 [2024-10-09 00:15:34.964915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.964939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:34.965000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:2626262e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:34.965014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 #48 NEW cov: 12418 ft: 15186 corp: 25/597b lim: 40 exec/s: 48 rss: 75Mb L: 23/36 MS: 1 ChangeBit- 00:08:04.474 [2024-10-09 00:15:35.004870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:35.004894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 #49 NEW cov: 12418 ft: 15193 corp: 26/609b lim: 40 exec/s: 49 rss: 75Mb L: 12/36 MS: 1 InsertByte- 00:08:04.474 [2024-10-09 00:15:35.044973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:35.044998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:04.474 #50 NEW cov: 12418 ft: 15215 corp: 27/624b lim: 40 exec/s: 50 rss: 75Mb L: 15/36 MS: 1 ChangeByte- 00:08:04.474 [2024-10-09 00:15:35.105464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:1414148c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:35.105491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:35.105553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:35.105567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.474 [2024-10-09 00:15:35.105626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.474 [2024-10-09 00:15:35.105639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.733 #51 NEW cov: 12418 ft: 15235 corp: 28/654b lim: 40 exec/s: 51 rss: 75Mb L: 30/36 MS: 1 InsertRepeatedBytes- 00:08:04.733 [2024-10-09 00:15:35.145385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2626ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.145410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.145473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff26db cdw11:f50a0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.145486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.733 #52 NEW cov: 12418 ft: 15308 corp: 29/673b lim: 40 exec/s: 52 rss: 75Mb L: 19/36 MS: 1 EraseBytes- 00:08:04.733 [2024-10-09 00:15:35.185390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06009414 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.185417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 #53 NEW cov: 12418 ft: 15324 corp: 30/683b lim: 40 exec/s: 53 rss: 75Mb L: 10/36 MS: 1 ChangeBinInt- 00:08:04.733 [2024-10-09 00:15:35.225621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262625 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.225647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.225708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:262626db SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.225722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.733 #54 NEW cov: 12418 ft: 15337 corp: 31/706b lim: 40 
exec/s: 54 rss: 75Mb L: 23/36 MS: 1 ChangeByte- 00:08:04.733 [2024-10-09 00:15:35.265978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.266004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.266061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.266075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.266132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff2626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.266146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.266206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:26260a0a cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.266219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.733 #55 NEW cov: 12418 ft: 15396 corp: 32/739b lim: 40 exec/s: 55 rss: 75Mb L: 33/36 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:04.733 [2024-10-09 00:15:35.325738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.325763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 #56 NEW cov: 12418 ft: 15400 corp: 33/754b lim: 40 exec/s: 56 rss: 75Mb L: 15/36 MS: 1 CopyPart- 00:08:04.733 [2024-10-09 00:15:35.366101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.366127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.733 [2024-10-09 00:15:35.366186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.733 [2024-10-09 00:15:35.366203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 #57 NEW cov: 12421 ft: 15508 corp: 34/777b lim: 40 exec/s: 57 rss: 75Mb L: 23/36 MS: 1 InsertRepeatedBytes- 00:08:04.992 [2024-10-09 00:15:35.405993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141410 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.406018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.992 #58 NEW cov: 12421 ft: 15528 corp: 35/788b lim: 40 exec/s: 58 rss: 75Mb L: 11/36 MS: 1 ChangeBit- 
00:08:04.992 [2024-10-09 00:15:35.446227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2626ffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.446252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.992 [2024-10-09 00:15:35.446312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26ffffff cdw11:f5db0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.446326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 #59 NEW cov: 12421 ft: 15551 corp: 36/807b lim: 40 exec/s: 59 rss: 75Mb L: 19/36 MS: 1 ShuffleBytes- 00:08:04.992 [2024-10-09 00:15:35.506688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.506712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.992 [2024-10-09 00:15:35.506772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:26262626 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.506787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.992 [2024-10-09 00:15:35.506851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.506865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.992 [2024-10-09 00:15:35.506923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff26 cdw11:2626260a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.992 [2024-10-09 00:15:35.506937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.992 #60 NEW cov: 12421 ft: 15589 corp: 37/845b lim: 40 exec/s: 30 rss: 75Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:04.992 #60 DONE cov: 12421 ft: 15589 corp: 37/845b lim: 40 exec/s: 30 rss: 75Mb 00:08:04.992 ###### Recommended dictionary. ###### 00:08:04.992 "\000\000\000\"" # Uses: 2 00:08:04.992 "\001'\030\234<\356\"P" # Uses: 0 00:08:04.992 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:04.992 ###### End of recommended dictionary. 
###### 00:08:04.992 Done 60 runs in 2 second(s) 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:05.250 00:15:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:05.250 [2024-10-09 00:15:35.723933] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
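
For reference, the start_llvm_fuzz invocation traced above boils down to the following per-fuzzer-type setup. This is a sketch reconstructed from the xtrace lines visible in the log, not the verbatim test/fuzz/llvm/nvmf/run.sh: the $rootdir variable, the "suffix" name, and the redirection targets of the sed and echo steps are assumptions inferred from the file names that appear elsewhere in the trace (/tmp/fuzz_json_NN.conf, /var/tmp/suppress_nvmf_fuzz).

# Sketch of one start_llvm_fuzz iteration (fuzzer_type=11 in the run above),
# reconstructed from the trace; paths and redirections are assumed, not verbatim.
start_llvm_fuzz() {
  local fuzzer_type=$1 timen=$2 core=$3
  local suffix port trid
  suffix=$(printf %02d "$fuzzer_type")   # e.g. 11 -> "11"
  port=44$suffix                         # fuzzer type N listens on TCP port 44NN (4410, 4411, ...)
  mkdir -p "$rootdir/../corpus/llvm_nvmf_$suffix"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Retarget the JSON config template from the default port 4420 (output path assumed)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$suffix.conf"
  # LeakSanitizer suppressions for allocations the target holds across the run
  echo leak:spdk_nvmf_qpair_disconnect > /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create >> /var/tmp/suppress_nvmf_fuzz
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
    -c "/tmp/fuzz_json_$suffix.conf" -t "$timen" \
    -D "$rootdir/../corpus/llvm_nvmf_$suffix" -Z "$fuzzer_type"
  rm -rf "/tmp/fuzz_json_$suffix.conf" /var/tmp/suppress_nvmf_fuzz   # run.sh@54 in the trace
}

The per-type port numbering (44NN) presumably lets each fuzzer type bring up a fresh NVMe/TCP listener without colliding with the previous target's socket; the ../common.sh@72-73 lines show the caller simply looping i over the fuzzer types and invoking start_llvm_fuzz "$i" 1 0x1.
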
00:08:05.250 [2024-10-09 00:15:35.724001] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889286 ] 00:08:05.507 [2024-10-09 00:15:35.919015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.507 [2024-10-09 00:15:35.991899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.507 [2024-10-09 00:15:36.051029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.507 [2024-10-09 00:15:36.067268] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:05.507 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.507 INFO: Seed: 2751175020 00:08:05.507 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:05.507 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:05.507 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:05.507 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.507 #2 INITED exec/s: 0 rss: 66Mb 00:08:05.507 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:05.507 This may also happen if the target rejected all inputs we tried so far 00:08:05.507 [2024-10-09 00:15:36.116441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.507 [2024-10-09 00:15:36.116469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.023 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:06.023 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:06.023 #3 NEW cov: 12202 ft: 12199 corp: 2/11b lim: 40 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:06.023 [2024-10-09 00:15:36.437401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.437455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.023 [2024-10-09 00:15:36.437526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:56565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.437546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.023 #4 NEW cov: 12316 ft: 13481 corp: 3/31b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:06.023 [2024-10-09 00:15:36.487375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.487403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.023 [2024-10-09 
00:15:36.487460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.487474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.023 #5 NEW cov: 12322 ft: 13762 corp: 4/51b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CopyPart- 00:08:06.023 [2024-10-09 00:15:36.547512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.547538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.023 [2024-10-09 00:15:36.547595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.547609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.023 #6 NEW cov: 12407 ft: 14059 corp: 5/71b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ChangeBit- 00:08:06.023 [2024-10-09 00:15:36.607792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.607835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.023 [2024-10-09 00:15:36.607894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.607908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.023 [2024-10-09 00:15:36.607967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.023 [2024-10-09 00:15:36.607981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.023 #7 NEW cov: 12407 ft: 14351 corp: 6/95b lim: 40 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:08:06.281 [2024-10-09 00:15:36.667845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a9a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.281 [2024-10-09 00:15:36.667871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.281 [2024-10-09 00:15:36.667927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f5a9a9a9 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.281 [2024-10-09 00:15:36.667940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.281 #8 NEW cov: 12407 ft: 14457 corp: 7/115b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBinInt- 00:08:06.281 [2024-10-09 00:15:36.707947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56567a56 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.707973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.282 [2024-10-09 00:15:36.708033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:56565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.708047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.282 #9 NEW cov: 12407 ft: 14535 corp: 8/135b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeByte- 00:08:06.282 [2024-10-09 00:15:36.747925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:02afe3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.747952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.282 #13 NEW cov: 12407 ft: 14596 corp: 9/143b lim: 40 exec/s: 0 rss: 74Mb L: 8/24 MS: 4 InsertByte-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:08:06.282 [2024-10-09 00:15:36.788218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.788243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.282 [2024-10-09 00:15:36.788298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5656565e cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.788312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.282 #14 NEW cov: 12407 ft: 14698 corp: 10/163b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeBit- 00:08:06.282 [2024-10-09 00:15:36.828270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.828296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.282 [2024-10-09 00:15:36.828355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a5656f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.828369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.282 #15 NEW cov: 12407 ft: 14747 corp: 11/183b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeByte- 00:08:06.282 [2024-10-09 00:15:36.888510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a56563d cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.888537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.282 [2024-10-09 00:15:36.888594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.282 [2024-10-09 00:15:36.888608] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.282 #16 NEW cov: 12407 ft: 14805 corp: 12/203b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ChangeByte- 00:08:06.540 [2024-10-09 00:15:36.928617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:36.928644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.540 [2024-10-09 00:15:36.928704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a5656f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:36.928723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.540 #17 NEW cov: 12407 ft: 14813 corp: 13/224b lim: 40 exec/s: 0 rss: 74Mb L: 21/24 MS: 1 InsertByte- 00:08:06.540 [2024-10-09 00:15:36.988605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:36.988630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.540 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:06.540 #18 NEW cov: 12430 ft: 14890 corp: 14/238b lim: 40 exec/s: 0 rss: 74Mb L: 14/24 MS: 1 EraseBytes- 00:08:06.540 [2024-10-09 00:15:37.048905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a9a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.048930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.540 [2024-10-09 00:15:37.048988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f5a9a9a9 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.049002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.540 #19 NEW cov: 12430 ft: 14918 corp: 15/258b lim: 40 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 ShuffleBytes- 00:08:06.540 [2024-10-09 00:15:37.109267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.109292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.540 [2024-10-09 00:15:37.109351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a560000 cdw11:000d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.109364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.540 [2024-10-09 00:15:37.109418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0000 cdw11:56f35656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.109431] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.540 #20 NEW cov: 12430 ft: 14940 corp: 16/289b lim: 40 exec/s: 20 rss: 74Mb L: 31/31 MS: 1 CrossOver- 00:08:06.540 [2024-10-09 00:15:37.169148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a956 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.540 [2024-10-09 00:15:37.169172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.798 #21 NEW cov: 12430 ft: 14973 corp: 17/304b lim: 40 exec/s: 21 rss: 74Mb L: 15/31 MS: 1 EraseBytes- 00:08:06.798 [2024-10-09 00:15:37.209382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000014 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.798 [2024-10-09 00:15:37.209406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.798 [2024-10-09 00:15:37.209463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a5656f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.209477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 #22 NEW cov: 12430 ft: 14995 corp: 18/324b lim: 40 exec/s: 22 rss: 74Mb L: 20/31 MS: 1 ChangeBinInt- 00:08:06.799 [2024-10-09 00:15:37.249473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:46565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.249500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.249558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.249572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 #23 NEW cov: 12430 ft: 15005 corp: 19/344b lim: 40 exec/s: 23 rss: 74Mb L: 20/31 MS: 1 ChangeBit- 00:08:06.799 [2024-10-09 00:15:37.289935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:0d0d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.289961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.290020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:56f35656 cdw11:56560a56 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.290034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.290088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.290102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 
00:15:37.290156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:000056f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.290170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.799 #24 NEW cov: 12430 ft: 15356 corp: 20/381b lim: 40 exec/s: 24 rss: 74Mb L: 37/37 MS: 1 CopyPart- 00:08:06.799 [2024-10-09 00:15:37.350127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.350151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.350211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.350224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.350277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.350291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.350349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.350363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.799 #25 NEW cov: 12430 ft: 15377 corp: 21/420b lim: 40 exec/s: 25 rss: 74Mb L: 39/39 MS: 1 CopyPart- 00:08:06.799 [2024-10-09 00:15:37.390010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a9a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.390034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.390097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f5a9a9a9 cdw11:5656a372 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.390112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.390167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:97f19d18 cdw11:27005656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.390181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.799 #26 NEW cov: 12430 ft: 15405 corp: 22/448b lim: 40 exec/s: 26 rss: 74Mb L: 28/39 MS: 1 CMP- DE: "\243r\227\361\235\030'\000"- 00:08:06.799 [2024-10-09 00:15:37.430330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.430355] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.430412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5656569b cdw11:9b9b9b9b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.430425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.430476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:9b9b9b9b cdw11:9b9b9b9b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.430490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.799 [2024-10-09 00:15:37.430549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:9b565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.799 [2024-10-09 00:15:37.430563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.057 #27 NEW cov: 12430 ft: 15429 corp: 23/482b lim: 40 exec/s: 27 rss: 74Mb L: 34/39 MS: 1 InsertRepeatedBytes- 00:08:07.057 [2024-10-09 00:15:37.470110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:46565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.057 [2024-10-09 00:15:37.470135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.057 [2024-10-09 00:15:37.470191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.057 [2024-10-09 00:15:37.470204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.057 #28 NEW cov: 12430 ft: 15497 corp: 24/502b lim: 40 exec/s: 28 rss: 75Mb L: 20/39 MS: 1 ChangeBit- 00:08:07.057 [2024-10-09 00:15:37.530299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a56563a cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.057 [2024-10-09 00:15:37.530324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.057 [2024-10-09 00:15:37.530381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.057 [2024-10-09 00:15:37.530395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.057 #29 NEW cov: 12430 ft: 15514 corp: 25/522b lim: 40 exec/s: 29 rss: 75Mb L: 20/39 MS: 1 ChangeByte- 00:08:07.057 [2024-10-09 00:15:37.570378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:56564f56 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.057 [2024-10-09 00:15:37.570403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.057 [2024-10-09 00:15:37.570463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:5 nsid:0 cdw10:0a5656f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.570477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.058 #30 NEW cov: 12430 ft: 15531 corp: 26/542b lim: 40 exec/s: 30 rss: 75Mb L: 20/39 MS: 1 ChangeBinInt- 00:08:07.058 [2024-10-09 00:15:37.610796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:565656af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.610824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.058 [2024-10-09 00:15:37.610884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:a9a95656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.610897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.058 [2024-10-09 00:15:37.610956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:5656560a cdw11:5656f356 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.610969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.058 [2024-10-09 00:15:37.611025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:5656567a cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.611039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.058 #31 NEW cov: 12430 ft: 15536 corp: 27/574b lim: 40 exec/s: 31 rss: 75Mb L: 32/39 MS: 1 CrossOver- 00:08:07.058 [2024-10-09 00:15:37.650607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.650631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.058 [2024-10-09 00:15:37.650689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.650703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.058 #32 NEW cov: 12430 ft: 15537 corp: 28/595b lim: 40 exec/s: 32 rss: 75Mb L: 21/39 MS: 1 InsertByte- 00:08:07.058 [2024-10-09 00:15:37.690604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:02afe3e3 cdw11:89e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.058 [2024-10-09 00:15:37.690641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.314 #34 NEW cov: 12430 ft: 15587 corp: 29/603b lim: 40 exec/s: 34 rss: 75Mb L: 8/39 MS: 2 EraseBytes-InsertByte- 00:08:07.314 [2024-10-09 00:15:37.750919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a956 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.314 [2024-10-09 00:15:37.750944] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.314 [2024-10-09 00:15:37.751003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5656565d cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.314 [2024-10-09 00:15:37.751016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.314 #35 NEW cov: 12430 ft: 15606 corp: 30/619b lim: 40 exec/s: 35 rss: 75Mb L: 16/39 MS: 1 InsertByte- 00:08:07.314 [2024-10-09 00:15:37.811452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565657 cdw11:0d0d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.314 [2024-10-09 00:15:37.811478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.314 [2024-10-09 00:15:37.811537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:56f35656 cdw11:56560a56 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.811551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.315 [2024-10-09 00:15:37.811611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.811625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.315 [2024-10-09 00:15:37.811677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00005656 cdw11:56f35656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.811691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.315 #36 NEW cov: 12430 ft: 15641 corp: 31/658b lim: 40 exec/s: 36 rss: 75Mb L: 39/39 MS: 1 CopyPart- 00:08:07.315 [2024-10-09 00:15:37.871098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a56563a cdw11:0a565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.871123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.315 #37 NEW cov: 12430 ft: 15659 corp: 32/666b lim: 40 exec/s: 37 rss: 75Mb L: 8/39 MS: 1 CrossOver- 00:08:07.315 [2024-10-09 00:15:37.931360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565614 cdw11:56564f56 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.931385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.315 [2024-10-09 00:15:37.931443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a5656f3 cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.315 [2024-10-09 00:15:37.931457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.573 #38 NEW cov: 12430 ft: 15684 corp: 33/686b lim: 40 exec/s: 38 rss: 75Mb L: 20/39 MS: 1 ChangeBinInt- 00:08:07.573 
[2024-10-09 00:15:37.991529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a56563a cdw11:56565656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:37.991555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.573 [2024-10-09 00:15:37.991613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a565656 cdw11:a37297f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:37.991627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.573 #39 NEW cov: 12430 ft: 15689 corp: 34/706b lim: 40 exec/s: 39 rss: 75Mb L: 20/39 MS: 1 PersAutoDict- DE: "\243r\227\361\235\030'\000"- 00:08:07.573 [2024-10-09 00:15:38.031970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a565656 cdw11:afa9a9a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:38.031995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.573 [2024-10-09 00:15:38.032055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:38.032073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.573 [2024-10-09 00:15:38.032132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:f5a9a9a9 cdw11:5656a372 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:38.032146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.573 [2024-10-09 00:15:38.032199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:97f19d18 cdw11:27005656 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:38.032213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.573 #40 NEW cov: 12430 ft: 15706 corp: 35/742b lim: 40 exec/s: 40 rss: 75Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:08:07.573 [2024-10-09 00:15:38.091638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0223e3e3 cdw11:89e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.573 [2024-10-09 00:15:38.091662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.573 #41 NEW cov: 12430 ft: 15741 corp: 36/750b lim: 40 exec/s: 20 rss: 75Mb L: 8/39 MS: 1 ChangeByte- 00:08:07.573 #41 DONE cov: 12430 ft: 15741 corp: 36/750b lim: 40 exec/s: 20 rss: 75Mb 00:08:07.573 ###### Recommended dictionary. ###### 00:08:07.573 "\243r\227\361\235\030'\000" # Uses: 1 00:08:07.573 ###### End of recommended dictionary. 
######
00:08:07.573 Done 41 runs in 2 second(s)
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412'
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:07.832 00:15:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12
[2024-10-09 00:15:38.306524] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
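Each block like the one above is one iteration of the suite's driver loop: the (( i++ )) and (( i < fuzz_num )) entries traced at ../common.sh@72 are the step and test of a C-style for loop over the fuzzer types, and ../common.sh@73 hands each type to start_llvm_fuzz with the one-second time budget and core mask seen in the trace. A sketch of the shape that trace implies; the bound fuzz_num and any logging redirection are assumptions, only the loop structure and the arguments come from the log:

    # Driver loop implied by the ../common.sh@72-73 xtrace entries; fuzz_num
    # is assumed to be set by the suite before the loop runs.
    for ((i = 0; i < fuzz_num; i++)); do
        start_llvm_fuzz "$i" 1 0x1   # fuzzer type, -t seconds, core mask
    done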
00:08:07.832 [2024-10-09 00:15:38.306592] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889645 ] 00:08:08.090 [2024-10-09 00:15:38.507162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.090 [2024-10-09 00:15:38.580490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.091 [2024-10-09 00:15:38.639438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.091 [2024-10-09 00:15:38.655688] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:08.091 INFO: Running with entropic power schedule (0xFF, 100). 00:08:08.091 INFO: Seed: 1045207048 00:08:08.091 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:08.091 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:08.091 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:08.091 INFO: A corpus is not provided, starting from an empty corpus 00:08:08.091 #2 INITED exec/s: 0 rss: 67Mb 00:08:08.091 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:08.091 This may also happen if the target rejected all inputs we tried so far 00:08:08.091 [2024-10-09 00:15:38.711443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.091 [2024-10-09 00:15:38.711472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.091 [2024-10-09 00:15:38.711531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.091 [2024-10-09 00:15:38.711545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.091 [2024-10-09 00:15:38.711602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.091 [2024-10-09 00:15:38.711616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.607 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:08.607 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:08.607 #6 NEW cov: 12201 ft: 12200 corp: 2/28b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 4 ShuffleBytes-ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:08.607 [2024-10-09 00:15:39.052482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.052541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.052625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND 
(19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.052651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.607 #7 NEW cov: 12314 ft: 13119 corp: 3/45b lim: 40 exec/s: 0 rss: 74Mb L: 17/27 MS: 1 InsertRepeatedBytes- 00:08:08.607 [2024-10-09 00:15:39.102226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.102253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.102310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.102328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.607 #8 NEW cov: 12320 ft: 13392 corp: 4/62b lim: 40 exec/s: 0 rss: 74Mb L: 17/27 MS: 1 ChangeBit- 00:08:08.607 [2024-10-09 00:15:39.162837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.162864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.162937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.162951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.163008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.163023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.163079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.163096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.163152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.163166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.607 #9 NEW cov: 12405 ft: 14009 corp: 5/102b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:08.607 [2024-10-09 00:15:39.222667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.222694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.222753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.222768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.607 [2024-10-09 00:15:39.222824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.607 [2024-10-09 00:15:39.222838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.866 #10 NEW cov: 12405 ft: 14171 corp: 6/129b lim: 40 exec/s: 0 rss: 74Mb L: 27/40 MS: 1 ChangeByte- 00:08:08.866 [2024-10-09 00:15:39.282854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:007a0080 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.282881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.866 [2024-10-09 00:15:39.282937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.282951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.866 [2024-10-09 00:15:39.283008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.283025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.866 #11 NEW cov: 12405 ft: 14213 corp: 7/156b lim: 40 exec/s: 0 rss: 74Mb L: 27/40 MS: 1 ChangeBit- 00:08:08.866 [2024-10-09 00:15:39.343182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.343209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.866 [2024-10-09 00:15:39.343267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.343281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.866 [2024-10-09 00:15:39.343336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.866 [2024-10-09 00:15:39.343349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.866 [2024-10-09 00:15:39.343406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.343419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.867 #12 NEW cov: 12405 ft: 14326 corp: 8/195b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:08:08.867 [2024-10-09 00:15:39.383122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.383148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.383222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.383237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.383294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.383308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.867 #13 NEW cov: 12405 ft: 14354 corp: 9/219b lim: 40 exec/s: 0 rss: 74Mb L: 24/40 MS: 1 EraseBytes- 00:08:08.867 [2024-10-09 00:15:39.443598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.443624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.443680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.443694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.443748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.443761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.443823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.443839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.443895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.443909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.867 #14 NEW cov: 12405 ft: 14385 corp: 10/259b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:08:08.867 [2024-10-09 00:15:39.483686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:08.867 [2024-10-09 00:15:39.483710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.483781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.483796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.483851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.483865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.483920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.483934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.867 [2024-10-09 00:15:39.483989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.867 [2024-10-09 00:15:39.484002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.126 #20 NEW cov: 12405 ft: 14470 corp: 11/299b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:08:09.126 [2024-10-09 00:15:39.523308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.523334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.523389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.523403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.126 #21 NEW cov: 12405 ft: 14494 corp: 12/316b lim: 40 exec/s: 0 rss: 74Mb L: 17/40 MS: 1 ShuffleBytes- 00:08:09.126 [2024-10-09 00:15:39.563889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.563913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.563984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.563998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.564051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.564067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.564118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.564131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.564187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.564200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.126 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:09.126 #22 NEW cov: 12428 ft: 14529 corp: 13/356b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:09.126 [2024-10-09 00:15:39.623607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.623632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.623689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.623702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.126 #23 NEW cov: 12428 ft: 14606 corp: 14/376b lim: 40 exec/s: 0 rss: 75Mb L: 20/40 MS: 1 CrossOver- 00:08:09.126 [2024-10-09 00:15:39.663709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.663734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.663791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a59ba5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.663805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.126 #24 NEW cov: 12428 ft: 14631 corp: 15/396b lim: 40 exec/s: 24 rss: 75Mb L: 20/40 MS: 1 ChangeBinInt- 00:08:09.126 [2024-10-09 00:15:39.723887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:3038a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.723913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.126 [2024-10-09 00:15:39.723970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.126 [2024-10-09 00:15:39.723984] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.126 #25 NEW cov: 12428 ft: 14652 corp: 16/419b lim: 40 exec/s: 25 rss: 75Mb L: 23/40 MS: 1 EraseBytes- 00:08:09.385 [2024-10-09 00:15:39.764406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.764431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.764505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.764519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.764579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.764592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.764647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.764660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.385 #26 NEW cov: 12428 ft: 14667 corp: 17/458b lim: 40 exec/s: 26 rss: 75Mb L: 39/40 MS: 1 ShuffleBytes- 00:08:09.385 [2024-10-09 00:15:39.824651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.824676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.824749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a530 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.824763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.824818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:38303030 cdw11:30a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.824832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.824897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.824910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.824962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 
00:15:39.824975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.385 #27 NEW cov: 12428 ft: 14677 corp: 18/498b lim: 40 exec/s: 27 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:08:09.385 [2024-10-09 00:15:39.864274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:b038a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.864299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.864373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.864387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.385 #28 NEW cov: 12428 ft: 14683 corp: 19/521b lim: 40 exec/s: 28 rss: 75Mb L: 23/40 MS: 1 ChangeBit- 00:08:09.385 [2024-10-09 00:15:39.924487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a30a5b0 cdw11:3038a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.924513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.924586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.924599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.385 #29 NEW cov: 12428 ft: 14698 corp: 20/544b lim: 40 exec/s: 29 rss: 75Mb L: 23/40 MS: 1 ShuffleBytes- 00:08:09.385 [2024-10-09 00:15:39.985112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.985139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.985196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.985210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.985266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.985280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.985335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.985348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.385 [2024-10-09 00:15:39.985404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.385 [2024-10-09 00:15:39.985418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.643 #30 NEW cov: 12428 ft: 14709 corp: 21/584b lim: 40 exec/s: 30 rss: 75Mb L: 40/40 MS: 1 ChangeASCIIInt- 00:08:09.643 [2024-10-09 00:15:40.045013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.045046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.045105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00002e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.045119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.045173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.045186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.643 #31 NEW cov: 12428 ft: 14753 corp: 22/611b lim: 40 exec/s: 31 rss: 75Mb L: 27/40 MS: 1 ChangeByte- 00:08:09.643 [2024-10-09 00:15:40.084945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:3038a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.084980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.085038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.085052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.643 #32 NEW cov: 12428 ft: 14783 corp: 23/634b lim: 40 exec/s: 32 rss: 75Mb L: 23/40 MS: 1 ChangeASCIIInt- 00:08:09.643 [2024-10-09 00:15:40.125497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.125530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.125590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3035a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.125604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.125660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.125674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.125731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.125744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.643 [2024-10-09 00:15:40.125799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:a5303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.643 [2024-10-09 00:15:40.125817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.644 #33 NEW cov: 12428 ft: 14811 corp: 24/674b lim: 40 exec/s: 33 rss: 75Mb L: 40/40 MS: 1 ChangeASCIIInt- 00:08:09.644 [2024-10-09 00:15:40.165130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.165156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.644 [2024-10-09 00:15:40.165215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a5a5a5a5 cdw11:a5303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.165229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.644 #34 NEW cov: 12428 ft: 14822 corp: 25/694b lim: 40 exec/s: 34 rss: 75Mb L: 20/40 MS: 1 EraseBytes- 00:08:09.644 [2024-10-09 00:15:40.225317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.225342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.644 [2024-10-09 00:15:40.225416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.225430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.644 #35 NEW cov: 12428 ft: 14831 corp: 26/714b lim: 40 exec/s: 35 rss: 75Mb L: 20/40 MS: 1 EraseBytes- 00:08:09.644 [2024-10-09 00:15:40.265572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.265598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.644 [2024-10-09 00:15:40.265657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.644 [2024-10-09 00:15:40.265671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.644 [2024-10-09 00:15:40.265724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:38303030 cdw11:30a5a5a5 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:09.644 [2024-10-09 00:15:40.265741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.901 #36 NEW cov: 12428 ft: 14850 corp: 27/741b lim: 40 exec/s: 36 rss: 75Mb L: 27/40 MS: 1 CrossOver- 00:08:09.901 [2024-10-09 00:15:40.326051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.326077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.326134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.326148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.326207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.326221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.326279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a5a5a5a5 cdw11:a5a5a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.326292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.326349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5d303030 cdw11:30303031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.326363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.901 #37 NEW cov: 12428 ft: 14919 corp: 28/781b lim: 40 exec/s: 37 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:09.901 [2024-10-09 00:15:40.385968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.385994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.386053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00190000 cdw11:00002e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.386067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.386123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.386136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.901 #38 NEW cov: 12428 ft: 14961 corp: 29/808b lim: 40 exec/s: 38 rss: 75Mb L: 27/40 MS: 1 CMP- DE: "\031\000\000\000"- 00:08:09.901 [2024-10-09 00:15:40.425827] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.425852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.425912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:303a3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.425926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.901 #39 NEW cov: 12428 ft: 14976 corp: 30/825b lim: 40 exec/s: 39 rss: 75Mb L: 17/40 MS: 1 ChangeByte- 00:08:09.901 [2024-10-09 00:15:40.486216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:300a3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.486243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.486305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303830 cdw11:303030a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.486321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.901 [2024-10-09 00:15:40.486379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a5a5a5 cdw11:a538a530 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.901 [2024-10-09 00:15:40.486393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.901 #40 NEW cov: 12428 ft: 15014 corp: 31/851b lim: 40 exec/s: 40 rss: 75Mb L: 26/40 MS: 1 CrossOver- 00:08:10.160 [2024-10-09 00:15:40.546514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30303830 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.546542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.546600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.546614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.546667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.546681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.546738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.546751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.160 #41 NEW cov: 
12428 ft: 15026 corp: 32/884b lim: 40 exec/s: 41 rss: 75Mb L: 33/40 MS: 1 CopyPart- 00:08:10.160 [2024-10-09 00:15:40.586450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.586476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.586534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.586548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.586603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.586616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.160 #42 NEW cov: 12428 ft: 15034 corp: 33/911b lim: 40 exec/s: 42 rss: 75Mb L: 27/40 MS: 1 ChangeBit- 00:08:10.160 [2024-10-09 00:15:40.626582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a303030 cdw11:30383030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.626608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.626668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3030a5a5 cdw11:a521a5a5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.626682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.626735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a5a53030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.626749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.160 #43 NEW cov: 12428 ft: 15048 corp: 34/936b lim: 40 exec/s: 43 rss: 76Mb L: 25/40 MS: 1 InsertByte- 00:08:10.160 [2024-10-09 00:15:40.686541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a307030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.686567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.160 [2024-10-09 00:15:40.686625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.160 [2024-10-09 00:15:40.686639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.160 #44 NEW cov: 12428 ft: 15087 corp: 35/953b lim: 40 exec/s: 22 rss: 76Mb L: 17/40 MS: 1 ChangeBit- 00:08:10.160 #44 DONE cov: 12428 ft: 15087 corp: 35/953b lim: 40 exec/s: 22 rss: 76Mb 00:08:10.160 ###### Recommended dictionary. 
###### 00:08:10.160 "\031\000\000\000" # Uses: 0 00:08:10.160 ###### End of recommended dictionary. ###### 00:08:10.160 Done 44 runs in 2 second(s) 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:10.419 00:15:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:10.419 [2024-10-09 00:15:40.881824] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:08:10.419 [2024-10-09 00:15:40.881891] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889997 ] 00:08:10.678 [2024-10-09 00:15:41.085688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.678 [2024-10-09 00:15:41.159376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.678 [2024-10-09 00:15:41.218905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.678 [2024-10-09 00:15:41.235140] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:10.678 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.678 INFO: Seed: 3626218608 00:08:10.678 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:10.678 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:10.678 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:10.678 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.678 #2 INITED exec/s: 0 rss: 66Mb 00:08:10.678 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:10.678 This may also happen if the target rejected all inputs we tried so far 00:08:10.678 [2024-10-09 00:15:41.301005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.678 [2024-10-09 00:15:41.301034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.678 [2024-10-09 00:15:41.301095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.678 [2024-10-09 00:15:41.301110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.678 [2024-10-09 00:15:41.301170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.678 [2024-10-09 00:15:41.301183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.678 [2024-10-09 00:15:41.301241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.678 [2024-10-09 00:15:41.301254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.197 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:11.197 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:11.197 #9 NEW cov: 12161 ft: 12170 corp: 2/40b lim: 40 exec/s: 0 rss: 73Mb L: 39/39 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:11.197 [2024-10-09 00:15:41.641985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.642033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.642114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.642135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.642203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.642227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.642298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.642317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.197 #10 NEW cov: 12301 ft: 12755 corp: 3/77b lim: 40 exec/s: 0 rss: 73Mb L: 37/39 MS: 1 EraseBytes- 00:08:11.197 [2024-10-09 00:15:41.701573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.701600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.197 #11 NEW cov: 12307 ft: 13670 corp: 4/90b lim: 40 exec/s: 0 rss: 74Mb L: 13/39 MS: 1 InsertRepeatedBytes- 00:08:11.197 [2024-10-09 00:15:41.741921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.741948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.742025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.742040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.742099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.742113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.197 #12 NEW cov: 12392 ft: 14099 corp: 5/119b lim: 40 exec/s: 0 rss: 74Mb L: 29/39 MS: 1 InsertRepeatedBytes- 00:08:11.197 [2024-10-09 00:15:41.802220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.802247] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.802324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.802339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.802397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.802412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.197 [2024-10-09 00:15:41.802472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.197 [2024-10-09 00:15:41.802485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.197 #13 NEW cov: 12392 ft: 14230 corp: 6/158b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ChangeBinInt- 00:08:11.454 [2024-10-09 00:15:41.842334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 [2024-10-09 00:15:41.842362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.454 [2024-10-09 00:15:41.842425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 [2024-10-09 00:15:41.842438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.454 [2024-10-09 00:15:41.842496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 [2024-10-09 00:15:41.842511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.454 [2024-10-09 00:15:41.842569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff0b cdw11:0b0b0b0b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 [2024-10-09 00:15:41.842583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.454 #14 NEW cov: 12392 ft: 14298 corp: 7/195b lim: 40 exec/s: 0 rss: 74Mb L: 37/39 MS: 1 InsertRepeatedBytes- 00:08:11.454 [2024-10-09 00:15:41.902628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 [2024-10-09 00:15:41.902654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.454 [2024-10-09 00:15:41.902713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.454 
[2024-10-09 00:15:41.902728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:41.902800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.902819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:41.902878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:765b7676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.902892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:41.902949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.902962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.455 #15 NEW cov: 12392 ft: 14423 corp: 8/235b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:08:11.455 [2024-10-09 00:15:41.962534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.962561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:41.962622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.962636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:41.962693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:41.962706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.455 #16 NEW cov: 12392 ft: 14486 corp: 9/261b lim: 40 exec/s: 0 rss: 74Mb L: 26/40 MS: 1 CopyPart- 00:08:11.455 [2024-10-09 00:15:42.002784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.002811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:42.002875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.002888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:42.002946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 
cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.002960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:42.003018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76777676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.003032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.455 #17 NEW cov: 12392 ft: 14553 corp: 10/300b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 ChangeBit- 00:08:11.455 [2024-10-09 00:15:42.042776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.042803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:42.042885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.042899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.455 [2024-10-09 00:15:42.042971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.455 [2024-10-09 00:15:42.042985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.455 #18 NEW cov: 12392 ft: 14588 corp: 11/326b lim: 40 exec/s: 0 rss: 74Mb L: 26/40 MS: 1 CopyPart- 00:08:11.713 [2024-10-09 00:15:42.103195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.713 [2024-10-09 00:15:42.103222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.103281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.103295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.103352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.103365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.103422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76777676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.103436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.103498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.103512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.714 #19 NEW cov: 12392 ft: 14640 corp: 12/366b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:08:11.714 [2024-10-09 00:15:42.162868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.162894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.714 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:11.714 #20 NEW cov: 12415 ft: 14720 corp: 13/379b lim: 40 exec/s: 0 rss: 74Mb L: 13/40 MS: 1 ChangeByte- 00:08:11.714 [2024-10-09 00:15:42.203092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.203118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.203176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:29ff7676 cdw11:7676ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.203190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.714 #21 NEW cov: 12415 ft: 14935 corp: 14/396b lim: 40 exec/s: 0 rss: 74Mb L: 17/40 MS: 1 CrossOver- 00:08:11.714 [2024-10-09 00:15:42.263246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.263272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.263351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.263365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.714 #22 NEW cov: 12415 ft: 14977 corp: 15/418b lim: 40 exec/s: 22 rss: 74Mb L: 22/40 MS: 1 EraseBytes- 00:08:11.714 [2024-10-09 00:15:42.303344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.303369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.714 [2024-10-09 00:15:42.303446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fffeffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.714 [2024-10-09 00:15:42.303462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.714 #23 NEW cov: 12415 ft: 15023 corp: 16/440b lim: 40 
exec/s: 23 rss: 74Mb L: 22/40 MS: 1 ChangeBit- 00:08:11.972 [2024-10-09 00:15:42.363674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.363700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.363776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.363793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.363857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffffe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.363871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.972 #24 NEW cov: 12415 ft: 15037 corp: 17/466b lim: 40 exec/s: 24 rss: 75Mb L: 26/40 MS: 1 CrossOver- 00:08:11.972 [2024-10-09 00:15:42.424093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.424119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.424178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:767676ff cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.424192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.424265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.424279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.424335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.424349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.424408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.424423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.972 #25 NEW cov: 12415 ft: 15065 corp: 18/506b lim: 40 exec/s: 25 rss: 75Mb L: 40/40 MS: 1 CrossOver- 00:08:11.972 [2024-10-09 00:15:42.464089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.464114] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.464194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.464208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.464265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.464278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.464340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.464355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.972 #26 NEW cov: 12415 ft: 15146 corp: 19/545b lim: 40 exec/s: 26 rss: 75Mb L: 39/40 MS: 1 ChangeBit- 00:08:11.972 [2024-10-09 00:15:42.504365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.504394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.504474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.504488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.504547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8a927676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.504560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.504617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:765b7676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.504632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.972 [2024-10-09 00:15:42.504691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.972 [2024-10-09 00:15:42.504705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.972 #27 NEW cov: 12415 ft: 15188 corp: 20/585b lim: 40 exec/s: 27 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:11.972 [2024-10-09 00:15:42.564228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffb5 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.564254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.564328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:b5b5b5b5 cdw11:b5b5b5b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.564343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.564400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b5b5b5b5 cdw11:b5ff29ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.564414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.973 #28 NEW cov: 12415 ft: 15220 corp: 21/612b lim: 40 exec/s: 28 rss: 75Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:08:11.973 [2024-10-09 00:15:42.604649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.604676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.604741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:767676ff cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.604754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.604818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.604832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.604890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.604908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.973 [2024-10-09 00:15:42.604962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.973 [2024-10-09 00:15:42.604976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.231 #29 NEW cov: 12415 ft: 15298 corp: 22/652b lim: 40 exec/s: 29 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:12.231 [2024-10-09 00:15:42.664798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767677 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.664828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.664903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.664917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.664977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8a927676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.664991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.665048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:765b7676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.665062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.665120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.665134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.231 #30 NEW cov: 12415 ft: 15305 corp: 23/692b lim: 40 exec/s: 30 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:08:12.231 [2024-10-09 00:15:42.724523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.724548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.724608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2923ff76 cdw11:767676ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.724622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.231 #31 NEW cov: 12415 ft: 15320 corp: 24/710b lim: 40 exec/s: 31 rss: 75Mb L: 18/40 MS: 1 InsertByte- 00:08:12.231 [2024-10-09 00:15:42.784831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.784856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.784936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.784950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.785010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76765b76 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.785027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.231 #32 NEW cov: 12415 ft: 15327 corp: 25/739b lim: 40 
exec/s: 32 rss: 75Mb L: 29/40 MS: 1 EraseBytes- 00:08:12.231 [2024-10-09 00:15:42.824951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffb4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.824976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.825053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:b5b5b5b5 cdw11:b5b5b5b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.231 [2024-10-09 00:15:42.825068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.231 [2024-10-09 00:15:42.825125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b5b5b5b5 cdw11:b5ff29ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.232 [2024-10-09 00:15:42.825139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.490 #33 NEW cov: 12415 ft: 15344 corp: 26/766b lim: 40 exec/s: 33 rss: 75Mb L: 27/40 MS: 1 ChangeBinInt- 00:08:12.490 [2024-10-09 00:15:42.885371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767677 cdw11:76767674 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.885396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.885457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767076 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.885471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.885528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:8a927676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.885542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.885597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:765b7676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.885611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.885668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.885681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.490 #34 NEW cov: 12415 ft: 15383 corp: 27/806b lim: 40 exec/s: 34 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:08:12.490 [2024-10-09 00:15:42.945303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.945328] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.945404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:1a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.945419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.945482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.945496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.490 #35 NEW cov: 12415 ft: 15389 corp: 28/832b lim: 40 exec/s: 35 rss: 75Mb L: 26/40 MS: 1 ChangeBinInt- 00:08:12.490 [2024-10-09 00:15:42.985276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.985301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:42.985375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:42.985389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.490 #36 NEW cov: 12415 ft: 15400 corp: 29/855b lim: 40 exec/s: 36 rss: 75Mb L: 23/40 MS: 1 EraseBytes- 00:08:12.490 [2024-10-09 00:15:43.025511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.025538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.025614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.025629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.025688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.025702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.490 #37 NEW cov: 12415 ft: 15424 corp: 30/881b lim: 40 exec/s: 37 rss: 75Mb L: 26/40 MS: 1 CopyPart- 00:08:12.490 [2024-10-09 00:15:43.085931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.085957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.086015] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.086029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.086088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002876 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.086101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.086160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:765b7676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.086174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.490 [2024-10-09 00:15:43.086233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.490 [2024-10-09 00:15:43.086247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.490 #38 NEW cov: 12415 ft: 15434 corp: 31/921b lim: 40 exec/s: 38 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:12.749 [2024-10-09 00:15:43.125807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffe2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.125837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.125899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.125913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.125973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.125987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.749 #39 NEW cov: 12415 ft: 15452 corp: 32/951b lim: 40 exec/s: 39 rss: 75Mb L: 30/40 MS: 1 InsertByte- 00:08:12.749 [2024-10-09 00:15:43.166158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.166186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.166245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:767676ff cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.166259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.166318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.166332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.166391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.166406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.166465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:76767676 cdw11:7676760a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.166479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:12.749 #40 NEW cov: 12415 ft: 15461 corp: 33/991b lim: 40 exec/s: 40 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:12.749 [2024-10-09 00:15:43.226087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff8dd6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.226114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.226174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:250fa118 cdw11:2700b5b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.226189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.226249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b5b5b5b5 cdw11:b5ff29ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.226267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.749 #41 NEW cov: 12415 ft: 15463 corp: 34/1018b lim: 40 exec/s: 41 rss: 75Mb L: 27/40 MS: 1 CMP- DE: "\215\326%\017\241\030'\000"- 00:08:12.749 [2024-10-09 00:15:43.286395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.286421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.286481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.286496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.749 [2024-10-09 00:15:43.286556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.749 [2024-10-09 00:15:43.286571] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:12.749 [2024-10-09 00:15:43.286631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:12.749 [2024-10-09 00:15:43.286645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:12.749 #42 NEW cov: 12415 ft: 15467 corp: 35/1057b lim: 40 exec/s: 21 rss: 75Mb L: 39/40 MS: 1 ShuffleBytes-
00:08:12.749 #42 DONE cov: 12415 ft: 15467 corp: 35/1057b lim: 40 exec/s: 21 rss: 75Mb
00:08:12.749 ###### Recommended dictionary. ######
00:08:12.749 "\215\326%\017\241\030'\000" # Uses: 0
00:08:12.749 ###### End of recommended dictionary. ######
00:08:12.749 Done 42 runs in 2 second(s)
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:13.008 00:15:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
00:08:13.008 [2024-10-09 00:15:43.480898] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0
initialization... 00:08:13.008 [2024-10-09 00:15:43.480968] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890349 ] 00:08:13.267 [2024-10-09 00:15:43.679196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.267 [2024-10-09 00:15:43.752974] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.267 [2024-10-09 00:15:43.811934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.267 [2024-10-09 00:15:43.828173] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:13.267 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.267 INFO: Seed: 1922239633 00:08:13.267 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:13.267 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:13.267 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:13.267 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.267 #2 INITED exec/s: 0 rss: 67Mb 00:08:13.267 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:13.267 This may also happen if the target rejected all inputs we tried so far 00:08:13.267 [2024-10-09 00:15:43.877706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.267 [2024-10-09 00:15:43.877737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.267 [2024-10-09 00:15:43.877795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.267 [2024-10-09 00:15:43.877810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.267 [2024-10-09 00:15:43.877886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.267 [2024-10-09 00:15:43.877903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.783 NEW_FUNC[1/716]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:13.783 NEW_FUNC[2/716]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:13.783 #8 NEW cov: 12193 ft: 12192 corp: 2/23b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:08:13.783 [2024-10-09 00:15:44.208835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.208872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.208932] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 
[2024-10-09 00:15:44.208946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.209002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.209016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.209071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.209088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.783 #16 NEW cov: 12313 ft: 13155 corp: 3/51b lim: 35 exec/s: 0 rss: 75Mb L: 28/28 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:13.783 [2024-10-09 00:15:44.248711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.248739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.248799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.248817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.248892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.248907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.248965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.248980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.783 #17 NEW cov: 12319 ft: 13298 corp: 4/79b lim: 35 exec/s: 0 rss: 75Mb L: 28/28 MS: 1 ShuffleBytes- 00:08:13.783 [2024-10-09 00:15:44.308723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.308753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.308816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.783 [2024-10-09 00:15:44.308849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.783 [2024-10-09 00:15:44.308909] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.308926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
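The SET FEATURES command/completion pairs printed by nvme_qpair.c in this run come from the fuzz_admin_set_features_command handler reported in the NEW_FUNC lines at startup (llvm_nvme_fuzz.c:392). In Set Features, cdw10 bits 7:0 select the Feature Identifier and bit 31 is the Save (SV) flag, which is why cdw10:8000000a (SV set, FID 0x0a Write Atomicity) completes with FEATURE ID NOT SAVEABLE (01/0d), while reserved FIDs such as 0x35 and 0x7e complete with INVALID FIELD (00/02). A minimal sketch of how such a command can be submitted through SPDK's host-side raw admin API follows; it is illustrative only, not the harness source: send_set_features and admin_cpl_cb are made-up names, the connected ctrlr handle is assumed, and the real harness derives the FID, SV bit, and a data buffer (visible as len:0x1000 above) from fuzz input.

    /* Illustrative sketch, not the actual llvm_nvme_fuzz.c code: build one
     * NVMe SET FEATURES admin command like those printed above and submit
     * it with spdk_nvme_ctrlr_cmd_admin_raw(), SPDK's API for sending an
     * arbitrary admin command on a connected controller. */
    #include "spdk/nvme.h"

    static void
    admin_cpl_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            /* The completion status (e.g. 00/02 INVALID FIELD or
             * 01/0d FEATURE ID NOT SAVEABLE) arrives here. */
    }

    static int
    send_set_features(struct spdk_nvme_ctrlr *ctrlr, uint8_t fid, bool save)
    {
            struct spdk_nvme_cmd cmd = {0};

            cmd.opc = SPDK_NVME_OPC_SET_FEATURES;      /* admin opcode 0x09 */
            cmd.cdw10 = ((uint32_t)save << 31) | fid;  /* SV bit | FID */

            /* No data buffer attached here; the log's entries carry a 4 KiB
             * payload, but a zero-length submission is enough to provoke
             * the status codes discussed above. */
            return spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, NULL, 0,
                                                 admin_cpl_cb, NULL);
    }

Under these assumptions, send_set_features(ctrlr, 0x0a, true) would reproduce the (01/0d) completion seen above against a target that does not support saveable features, and send_set_features(ctrlr, 0x7e, false) the (00/02) one.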
00:08:13.784 #18 NEW cov: 12404 ft: 13570 corp: 5/105b lim: 35 exec/s: 0 rss: 75Mb L: 26/28 MS: 1 InsertRepeatedBytes- 00:08:13.784 [2024-10-09 00:15:44.368904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.368929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.784 [2024-10-09 00:15:44.368991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.369008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.784 [2024-10-09 00:15:44.369063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.369078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.784 #19 NEW cov: 12404 ft: 13684 corp: 6/128b lim: 35 exec/s: 0 rss: 75Mb L: 23/28 MS: 1 EraseBytes- 00:08:13.784 [2024-10-09 00:15:44.409042] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.409069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.784 [2024-10-09 00:15:44.409129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.409143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.784 [2024-10-09 00:15:44.409204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.784 [2024-10-09 00:15:44.409218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.041 #20 NEW cov: 12404 ft: 13747 corp: 7/151b lim: 35 exec/s: 0 rss: 75Mb L: 23/28 MS: 1 CopyPart- 00:08:14.041 [2024-10-09 00:15:44.469212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.041 [2024-10-09 00:15:44.469241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.041 [2024-10-09 00:15:44.469317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000fe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.041 [2024-10-09 00:15:44.469334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.041 [2024-10-09 00:15:44.469395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.041 [2024-10-09 00:15:44.469412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.042 #21 NEW cov: 12404 ft: 13845 corp: 8/177b lim: 35 exec/s: 0 
rss: 75Mb L: 26/28 MS: 1 ChangeBit- 00:08:14.042 [2024-10-09 00:15:44.529373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.529399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.042 [2024-10-09 00:15:44.529460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.529475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.042 [2024-10-09 00:15:44.529536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.529550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.042 #22 NEW cov: 12404 ft: 13884 corp: 9/201b lim: 35 exec/s: 0 rss: 75Mb L: 24/28 MS: 1 InsertByte- 00:08:14.042 [2024-10-09 00:15:44.569144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.569168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.042 #23 NEW cov: 12404 ft: 14693 corp: 10/209b lim: 35 exec/s: 0 rss: 75Mb L: 8/28 MS: 1 CrossOver- 00:08:14.042 [2024-10-09 00:15:44.609576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.609604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.042 [2024-10-09 00:15:44.609665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.609682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.042 [2024-10-09 00:15:44.609745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.609762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.042 #24 NEW cov: 12404 ft: 14752 corp: 11/232b lim: 35 exec/s: 0 rss: 75Mb L: 23/28 MS: 1 InsertByte- 00:08:14.042 [2024-10-09 00:15:44.649746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.649774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.042 [2024-10-09 00:15:44.649833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.649850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.042 
[2024-10-09 00:15:44.649911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.042 [2024-10-09 00:15:44.649927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.042 #25 NEW cov: 12404 ft: 14803 corp: 12/258b lim: 35 exec/s: 0 rss: 75Mb L: 26/28 MS: 1 ChangeByte- 00:08:14.300 [2024-10-09 00:15:44.689837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.689867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.689927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.689943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.690004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.690021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.300 #26 NEW cov: 12404 ft: 14927 corp: 13/281b lim: 35 exec/s: 0 rss: 75Mb L: 23/28 MS: 1 ChangeBit- 00:08:14.300 [2024-10-09 00:15:44.750023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.750047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.750120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.750135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.750193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.750207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.300 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:14.300 #27 NEW cov: 12427 ft: 14985 corp: 14/305b lim: 35 exec/s: 0 rss: 75Mb L: 24/28 MS: 1 ChangeByte- 00:08:14.300 [2024-10-09 00:15:44.810277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.810304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.810366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.810383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.810439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.810456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.810515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.810531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.300 #28 NEW cov: 12427 ft: 15006 corp: 15/335b lim: 35 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:14.300 [2024-10-09 00:15:44.850266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.850294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.850356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.850374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.300 [2024-10-09 00:15:44.850434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.850450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.300 #29 NEW cov: 12427 ft: 15028 corp: 16/361b lim: 35 exec/s: 29 rss: 75Mb L: 26/30 MS: 1 ChangeBit- 00:08:14.300 [2024-10-09 00:15:44.910128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.300 [2024-10-09 00:15:44.910153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 #30 NEW cov: 12427 ft: 15058 corp: 17/369b lim: 35 exec/s: 30 rss: 75Mb L: 8/30 MS: 1 ChangeBit- 00:08:14.559 [2024-10-09 00:15:44.970591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:44.970618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:44.970677] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:44.970693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:44.970751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:44.970765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.559 #31 NEW cov: 12427 ft: 15119 corp: 18/395b lim: 35 exec/s: 31 rss: 75Mb L: 26/30 MS: 1 ChangeBinInt- 00:08:14.559 [2024-10-09 00:15:45.010877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.010902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.010963] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.010980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.011038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.011052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.011125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.011142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.559 #32 NEW cov: 12427 ft: 15146 corp: 19/428b lim: 35 exec/s: 32 rss: 75Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:14.559 [2024-10-09 00:15:45.050978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.051005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.051068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.051084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.051144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.051161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.051219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.051234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.559 #33 NEW cov: 12427 ft: 15159 corp: 20/456b lim: 35 exec/s: 33 rss: 75Mb L: 28/33 MS: 1 CopyPart- 00:08:14.559 [2024-10-09 00:15:45.091077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.091104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.091165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.091181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.091241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.091257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.091317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.091334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.559 #34 NEW cov: 12427 ft: 15165 corp: 21/484b lim: 35 exec/s: 34 rss: 76Mb L: 28/33 MS: 1 CopyPart- 00:08:14.559 [2024-10-09 00:15:45.151438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.151465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.151529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.151547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.151604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.559 [2024-10-09 00:15:45.151621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.559 [2024-10-09 00:15:45.151679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.560 [2024-10-09 00:15:45.151694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.560 [2024-10-09 00:15:45.151752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.560 [2024-10-09 00:15:45.151767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:14.818 #35 NEW cov: 12427 ft: 15222 corp: 22/519b lim: 35 exec/s: 35 rss: 76Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:14.818 [2024-10-09 00:15:45.211425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.211450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.211511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.211525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.211581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.211596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.211652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.211666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.818 #36 NEW cov: 12427 ft: 15229 corp: 23/547b lim: 35 exec/s: 36 rss: 76Mb L: 28/35 MS: 1 ChangeBinInt- 00:08:14.818 [2024-10-09 00:15:45.251509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.251534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.251599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.251616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.251676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.251693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.251755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.251776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.818 #37 NEW cov: 12427 ft: 15252 corp: 24/575b lim: 35 exec/s: 37 rss: 76Mb L: 28/35 MS: 1 ChangeBinInt- 00:08:14.818 [2024-10-09 00:15:45.291351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.291379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.291441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.291457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.818 #38 NEW cov: 12427 ft: 15430 corp: 25/590b lim: 35 exec/s: 38 rss: 76Mb L: 15/35 MS: 1 EraseBytes- 00:08:14.818 [2024-10-09 00:15:45.351575] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.351600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.351660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.351675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.351736] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.351752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.818 #39 NEW cov: 12427 ft: 15455 corp: 26/613b lim: 35 exec/s: 39 rss: 76Mb L: 23/35 MS: 1 ChangeBit- 00:08:14.818 [2024-10-09 00:15:45.391894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.391919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.391981] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.391996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.392057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.392072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.818 [2024-10-09 00:15:45.392133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.818 [2024-10-09 00:15:45.392147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.818 #40 NEW cov: 12427 ft: 15474 corp: 27/641b lim: 35 exec/s: 40 rss: 76Mb L: 28/35 MS: 1 ChangeBinInt- 00:08:15.078 [2024-10-09 00:15:45.452172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.452198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.452258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.452274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.452339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.452356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.452415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.452432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.078 #41 NEW cov: 12427 ft: 15506 corp: 28/673b lim: 35 exec/s: 41 rss: 76Mb L: 32/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:15.078 [2024-10-09 00:15:45.492014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.492042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.492106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000007f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.492123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.492183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.492197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 #42 NEW cov: 12427 ft: 15526 corp: 29/699b lim: 35 exec/s: 42 rss: 76Mb L: 26/35 MS: 1 ChangeBit- 00:08:15.078 [2024-10-09 00:15:45.532291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000e4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.532319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.532381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.532398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.532456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.532472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.532530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.532544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.078 #43 NEW cov: 12427 ft: 15552 corp: 30/729b lim: 35 exec/s: 43 rss: 76Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:08:15.078 [2024-10-09 00:15:45.572367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.572392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.572453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.572467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.572527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.572548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.572621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.572636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.078 #44 NEW cov: 12427 ft: 15569 corp: 31/757b lim: 35 exec/s: 44 rss: 76Mb L: 28/35 MS: 1 ChangeBit- 00:08:15.078 [2024-10-09 00:15:45.612502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.612526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.612589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.612603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.612662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.612676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.612736] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.612750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.078 #45 NEW cov: 12427 ft: 15591 corp: 32/791b lim: 35 exec/s: 45 rss: 76Mb L: 34/35 MS: 1 CopyPart- 00:08:15.078 [2024-10-09 00:15:45.672487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.672512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.672574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 [2024-10-09 00:15:45.672589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.078 [2024-10-09 00:15:45.672650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.078 
[2024-10-09 00:15:45.672664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.078 #46 NEW cov: 12427 ft: 15607 corp: 33/815b lim: 35 exec/s: 46 rss: 76Mb L: 24/35 MS: 1 ShuffleBytes- 00:08:15.348 [2024-10-09 00:15:45.712832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.348 [2024-10-09 00:15:45.712857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.348 [2024-10-09 00:15:45.712921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.348 [2024-10-09 00:15:45.712935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.348 [2024-10-09 00:15:45.712996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.713010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.713073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.713093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.349 #47 NEW cov: 12427 ft: 15648 corp: 34/844b lim: 35 exec/s: 47 rss: 76Mb L: 29/35 MS: 1 InsertByte- 00:08:15.349 [2024-10-09 00:15:45.753039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.753064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.753123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.753137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.753197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.753212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.753272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.753287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.753347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.753364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:15.349 #48 NEW cov: 12427 ft: 15662 corp: 
35/879b lim: 35 exec/s: 48 rss: 76Mb L: 35/35 MS: 1 CrossOver- 00:08:15.349 [2024-10-09 00:15:45.812899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.812928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.812987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.813004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.349 [2024-10-09 00:15:45.813064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.813082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.349 #49 NEW cov: 12427 ft: 15670 corp: 36/902b lim: 35 exec/s: 49 rss: 76Mb L: 23/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:15.349 [2024-10-09 00:15:45.853186] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.349 [2024-10-09 00:15:45.853212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.350 [2024-10-09 00:15:45.853274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.350 [2024-10-09 00:15:45.853288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.350 [2024-10-09 00:15:45.853347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.350 [2024-10-09 00:15:45.853361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.350 [2024-10-09 00:15:45.853442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.350 [2024-10-09 00:15:45.853457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.350 #50 NEW cov: 12427 ft: 15712 corp: 37/930b lim: 35 exec/s: 25 rss: 77Mb L: 28/35 MS: 1 ChangeBinInt- 00:08:15.350 #50 DONE cov: 12427 ft: 15712 corp: 37/930b lim: 35 exec/s: 25 rss: 77Mb 00:08:15.350 ###### Recommended dictionary. ###### 00:08:15.350 "\001\000\000\000" # Uses: 2 00:08:15.350 ###### End of recommended dictionary. 
###### 00:08:15.350 Done 50 runs in 2 second(s) 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:15.617 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:15.618 00:15:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:15.618 [2024-10-09 00:15:46.083578] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
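For orientation, the shell trace just above is the start_llvm_fuzz loop in ../common.sh provisioning fuzzer 15: printf %02d turns the fuzzer index into a zero-padded suffix, the listening port becomes 44 followed by that suffix (4415), sed rewrites the shared fuzz_json.conf to that trsvcid, two LSAN leak suppressions are appended, and llvm_nvme_fuzz is launched for a bounded time (-t 1) against the resulting TCP trid, with -Z 15 selecting this iteration's fuzzer (the admin Get Features handler, per the NEW_FUNC lines that follow). The TestOneInput symbol libFuzzer reports below is the harness entry point. As a minimal sketch only — this is NOT the code in SPDK's llvm_nvme_fuzz.c, and nvme_cmd_sketch is a hypothetical stand-in for SPDK's struct spdk_nvme_cmd — a TestOneInput-shaped harness reinterprets the raw fuzz bytes as an NVMe admin command before submitting it:

/* Hedged sketch of a libFuzzer entry point; nvme_cmd_sketch is a
 * hypothetical stand-in for SPDK's struct spdk_nvme_cmd, and this is
 * not the actual SPDK harness. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct nvme_cmd_sketch {
	uint8_t  opc;    /* opcode: 0x09 = Set Features, 0x0a = Get Features */
	uint16_t cid;    /* command identifier, echoed back in the completion */
	uint32_t nsid;   /* namespace id */
	uint32_t cdw10;  /* Set/Get Features: bits 7:0 = FID, bit 31 = Save */
	uint32_t cdw11;  /* feature-specific value */
};

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct nvme_cmd_sketch cmd = {0};

	if (size < sizeof(cmd)) {
		return 0;  /* too short to shape a command; skip this input */
	}
	memcpy(&cmd, data, sizeof(cmd));
	/* A real harness would now submit cmd over the trid configured
	 * above (tcp/127.0.0.1:4415) and print the completion; those
	 * submissions are what produce the nvme_qpair.c NOTICE pairs
	 * seen throughout this log. */
	return 0;
}

Read against this layout, the cdw10 values in the SET FEATURES lines above decode naturally: 0x800000ff sets the Save bit (bit 31) on reserved feature identifier 0xff, which the target rejects as FEATURE ID NOT SAVEABLE (01/0d), while probes of feature identifiers the target does not implement come back as INVALID FIELD (00/02).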
00:08:15.618 [2024-10-09 00:15:46.083665] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890641 ] 00:08:15.876 [2024-10-09 00:15:46.283771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.876 [2024-10-09 00:15:46.356920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.876 [2024-10-09 00:15:46.415819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.876 [2024-10-09 00:15:46.432055] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:15.876 INFO: Running with entropic power schedule (0xFF, 100). 00:08:15.876 INFO: Seed: 233280437 00:08:15.876 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:15.876 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:15.876 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:15.876 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.876 #2 INITED exec/s: 0 rss: 66Mb 00:08:15.876 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:15.876 This may also happen if the target rejected all inputs we tried so far 00:08:15.876 [2024-10-09 00:15:46.500654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.876 [2024-10-09 00:15:46.500699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.876 [2024-10-09 00:15:46.500811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.876 [2024-10-09 00:15:46.500833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.876 [2024-10-09 00:15:46.500971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.876 [2024-10-09 00:15:46.500990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.876 [2024-10-09 00:15:46.501097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.876 [2024-10-09 00:15:46.501116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.392 NEW_FUNC[1/714]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:16.392 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:16.392 #6 NEW cov: 12171 ft: 12172 corp: 2/30b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 4 CrossOver-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:16.392 [2024-10-09 00:15:46.861321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 
00:15:46.861363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.861474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.861493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.861600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.861618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.392 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:16.392 #7 NEW cov: 12298 ft: 12974 corp: 3/62b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:16.392 [2024-10-09 00:15:46.911554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.911584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.911696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.911713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.911820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.911854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.392 #8 NEW cov: 12304 ft: 13091 corp: 4/95b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:08:16.392 [2024-10-09 00:15:46.981863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.981891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.981995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.982011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.392 [2024-10-09 00:15:46.982115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.392 [2024-10-09 00:15:46.982132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.392 #9 NEW cov: 12389 ft: 13279 corp: 5/128b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:08:16.651 [2024-10-09 00:15:47.052005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.052033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.052141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.052157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.052253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.052270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.651 #10 NEW cov: 12389 ft: 13398 corp: 6/161b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ShuffleBytes- 00:08:16.651 [2024-10-09 00:15:47.122307] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.122333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.122441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.122457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.122568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.122583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.651 #16 NEW cov: 12389 ft: 13466 corp: 7/194b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ChangeByte- 00:08:16.651 [2024-10-09 00:15:47.172518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.172545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.172644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000032 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.172659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.651 NEW_FUNC[1/1]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:16.651 #17 NEW cov: 12427 ft: 13731 corp: 8/228b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:08:16.651 [2024-10-09 00:15:47.242634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.242662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.242774] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.242791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.651 [2024-10-09 00:15:47.242898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.651 [2024-10-09 00:15:47.242916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.651 #18 NEW cov: 12427 ft: 13822 corp: 9/258b lim: 35 exec/s: 0 rss: 74Mb L: 30/34 MS: 1 InsertRepeatedBytes- 00:08:16.909 [2024-10-09 00:15:47.293020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.293049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.293160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.293177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.293276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.293292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.909 #19 NEW cov: 12427 ft: 13943 corp: 10/291b lim: 35 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 ChangeBit- 00:08:16.909 [2024-10-09 00:15:47.343260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.343287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.343389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.343405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.343507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.343525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.909 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:16.909 #20 NEW cov: 12450 ft: 14005 corp: 11/321b lim: 35 exec/s: 0 rss: 74Mb L: 30/34 MS: 1 ChangeByte- 00:08:16.909 [2024-10-09 00:15:47.422869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.422898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.909 #21 
NEW cov: 12450 ft: 14513 corp: 12/337b lim: 35 exec/s: 0 rss: 74Mb L: 16/34 MS: 1 EraseBytes- 00:08:16.909 [2024-10-09 00:15:47.483828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.483862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.483980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.484000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.909 [2024-10-09 00:15:47.484110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.909 [2024-10-09 00:15:47.484129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.909 #22 NEW cov: 12450 ft: 14561 corp: 13/370b lim: 35 exec/s: 22 rss: 74Mb L: 33/34 MS: 1 ChangeBit- 00:08:17.167 [2024-10-09 00:15:47.564165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.564196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.564303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.564320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.564432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.564449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.167 #23 NEW cov: 12450 ft: 14643 corp: 14/400b lim: 35 exec/s: 23 rss: 74Mb L: 30/34 MS: 1 ChangeBit- 00:08:17.167 [2024-10-09 00:15:47.634248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.634275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.634369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.634385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.634489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.634504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.634604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.634620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.167 #24 NEW cov: 12450 ft: 14686 corp: 15/429b lim: 35 exec/s: 24 rss: 75Mb L: 29/34 MS: 1 ShuffleBytes- 00:08:17.167 [2024-10-09 00:15:47.704698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.704725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.704831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.704848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.704956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.704976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.167 #25 NEW cov: 12450 ft: 14706 corp: 16/458b lim: 35 exec/s: 25 rss: 75Mb L: 29/34 MS: 1 EraseBytes- 00:08:17.167 [2024-10-09 00:15:47.754855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.754880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.754998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.755016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.167 [2024-10-09 00:15:47.755112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.167 [2024-10-09 00:15:47.755129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.167 #26 NEW cov: 12450 ft: 14733 corp: 17/491b lim: 35 exec/s: 26 rss: 75Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:17.426 [2024-10-09 00:15:47.825418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.825444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.825552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.825570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.825675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000003ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 
[2024-10-09 00:15:47.825693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.426 #27 NEW cov: 12450 ft: 14831 corp: 18/521b lim: 35 exec/s: 27 rss: 75Mb L: 30/34 MS: 1 ChangeBit- 00:08:17.426 [2024-10-09 00:15:47.895532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.895558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.895656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.895674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.895774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.895790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.426 #28 NEW cov: 12450 ft: 14838 corp: 19/551b lim: 35 exec/s: 28 rss: 75Mb L: 30/34 MS: 1 ChangeBit- 00:08:17.426 [2024-10-09 00:15:47.945638] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000029 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.945665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.945767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.945784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.945898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.945914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.946009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.946025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.426 #29 NEW cov: 12450 ft: 14847 corp: 20/582b lim: 35 exec/s: 29 rss: 75Mb L: 31/34 MS: 1 InsertByte- 00:08:17.426 [2024-10-09 00:15:47.996006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.996032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.996136] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.996154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.426 [2024-10-09 00:15:47.996261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000003ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.426 [2024-10-09 00:15:47.996277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.426 #30 NEW cov: 12450 ft: 14913 corp: 21/615b lim: 35 exec/s: 30 rss: 75Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:08:17.684 [2024-10-09 00:15:48.066474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.066502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.066610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.066627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.066719] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.066737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.684 #31 NEW cov: 12450 ft: 14984 corp: 22/644b lim: 35 exec/s: 31 rss: 75Mb L: 29/34 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:17.684 [2024-10-09 00:15:48.136435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.136461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.136568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.136584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.136675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.136690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.136784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.136804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.684 #32 NEW cov: 12450 ft: 14999 corp: 23/673b lim: 35 exec/s: 32 rss: 75Mb L: 29/34 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:17.684 [2024-10-09 00:15:48.206790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.206822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.206926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000003ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.206942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.207036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.207052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.207146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.207164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.684 #33 NEW cov: 12450 ft: 15021 corp: 24/702b lim: 35 exec/s: 33 rss: 75Mb L: 29/34 MS: 1 ChangeBit- 00:08:17.684 [2024-10-09 00:15:48.257500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.257525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.257635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.257651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.257740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.257758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.257871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.257892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:17.684 #34 NEW cov: 12450 ft: 15080 corp: 25/737b lim: 35 exec/s: 34 rss: 75Mb L: 35/35 MS: 1 CMP- DE: "\007\000"- 00:08:17.684 [2024-10-09 00:15:48.307279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.307306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.307412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.684 [2024-10-09 00:15:48.307428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.684 [2024-10-09 00:15:48.307526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.685 [2024-10-09 00:15:48.307543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.959 #35 NEW cov: 12450 ft: 15112 corp: 26/770b lim: 35 exec/s: 35 rss: 75Mb L: 33/35 MS: 1 ChangeBinInt- 00:08:17.959 [2024-10-09 00:15:48.357190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000029 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.959 [2024-10-09 00:15:48.357216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.959 [2024-10-09 00:15:48.357321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.959 [2024-10-09 00:15:48.357337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.959 [2024-10-09 00:15:48.357439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.959 [2024-10-09 00:15:48.357458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.959 [2024-10-09 00:15:48.357563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.959 [2024-10-09 00:15:48.357581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.960 #36 NEW cov: 12450 ft: 15127 corp: 27/801b lim: 35 exec/s: 36 rss: 75Mb L: 31/35 MS: 1 ChangeByte- 00:08:17.960 [2024-10-09 00:15:48.426926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.960 [2024-10-09 00:15:48.426952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.960 #37 NEW cov: 12450 ft: 15140 corp: 28/821b lim: 35 exec/s: 37 rss: 75Mb L: 20/35 MS: 1 InsertRepeatedBytes- 00:08:17.960 [2024-10-09 00:15:48.497966] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.960 [2024-10-09 00:15:48.497992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.960 [2024-10-09 00:15:48.498098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.960 [2024-10-09 00:15:48.498116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.960 [2024-10-09 00:15:48.498224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.960 [2024-10-09 00:15:48.498242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.960 #38 NEW cov: 12450 ft: 15152 corp: 29/850b lim: 35 exec/s: 19 rss: 75Mb L: 29/35 MS: 1 ChangeByte- 00:08:17.960 #38 DONE cov: 12450 ft: 15152 corp: 29/850b lim: 35 exec/s: 19 rss: 75Mb 00:08:17.960 ###### Recommended 
dictionary. ###### 00:08:17.960 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:17.960 "\007\000" # Uses: 0 00:08:17.960 ###### End of recommended dictionary. ###### 00:08:17.960 Done 38 runs in 2 second(s) 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:18.224 00:15:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:18.224 [2024-10-09 00:15:48.696689] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
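Run 15 ends after 38 runs, and the loop provisions fuzzer 16 the same way (port 4416, -Z 16); its NEW_FUNC lines below show it driving fuzz_nvm_read_command, i.e. I/O READ traffic rather than admin commands. Two decoding notes for the completions in this section. First, the status printed as "(SCT/SC)" is an NVMe status code type / status code pair: (00/02) is the generic Invalid Field in Command, (00/0b) Invalid Namespace or Format (the usual verdict for the READs below, whose dnr:1 flag means Do Not Retry), and (01/0d) the command-specific Feature Identifier Not Saveable. Second, the giant LBA values are just the mutator's repeated-byte fill read back as integers: lba:18085043209519168250 is 0xfafafafafafafafa, and len:64251 is 0xfafb. A minimal decoder covering only the pairs that appear in this log — an illustration, not SPDK's spdk_nvme_print_completion:

#include <stdint.h>
#include <stdio.h>

/* Table deliberately trimmed to the (SCT/SC) pairs seen in this log. */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0 && sc == 0x02) return "INVALID FIELD";
	if (sct == 0x0 && sc == 0x0b) return "INVALID NAMESPACE OR FORMAT";
	if (sct == 0x1 && sc == 0x0d) return "FEATURE ID NOT SAVEABLE";
	return "unlisted here; see the NVMe base spec status tables";
}

int main(void)
{
	printf("(00/02) -> %s\n", nvme_status_str(0x0, 0x02));
	printf("(00/0b) -> %s\n", nvme_status_str(0x0, 0x0b));
	printf("(01/0d) -> %s\n", nvme_status_str(0x1, 0x0d));
	return 0;
}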
00:08:18.224 [2024-10-09 00:15:48.696751] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890971 ] 00:08:18.482 [2024-10-09 00:15:48.898864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.483 [2024-10-09 00:15:48.973070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.483 [2024-10-09 00:15:49.032154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.483 [2024-10-09 00:15:49.048385] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:18.483 INFO: Running with entropic power schedule (0xFF, 100). 00:08:18.483 INFO: Seed: 2847277238 00:08:18.483 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:18.483 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:18.483 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:18.483 INFO: A corpus is not provided, starting from an empty corpus 00:08:18.483 #2 INITED exec/s: 0 rss: 66Mb 00:08:18.483 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:18.483 This may also happen if the target rejected all inputs we tried so far 00:08:18.483 [2024-10-09 00:15:49.097545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.483 [2024-10-09 00:15:49.097575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.483 [2024-10-09 00:15:49.097616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.483 [2024-10-09 00:15:49.097633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.483 [2024-10-09 00:15:49.097685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.483 [2024-10-09 00:15:49.097701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.483 [2024-10-09 00:15:49.097753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.483 [2024-10-09 00:15:49.097771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.998 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:18.999 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:18.999 #5 NEW cov: 12273 ft: 12273 corp: 2/92b lim: 105 exec/s: 0 rss: 73Mb L: 91/91 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:18.999 [2024-10-09 00:15:49.418572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.418619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.418687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.418709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.418770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.418792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.418862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.418884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.999 #6 NEW cov: 12388 ft: 12961 corp: 3/194b lim: 105 exec/s: 0 rss: 73Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:08:18.999 [2024-10-09 00:15:49.458495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.458523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.458568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.458584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.458640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616939039723 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.458656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.458712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.458728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.999 #7 NEW cov: 12394 ft: 13118 corp: 4/297b lim: 105 exec/s: 0 rss: 73Mb L: 103/103 MS: 1 InsertByte- 00:08:18.999 [2024-10-09 00:15:49.518550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.518580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.518630] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.518650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.518706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.518724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.999 #8 NEW cov: 12479 ft: 13820 corp: 5/375b lim: 105 exec/s: 0 rss: 74Mb L: 78/103 MS: 1 EraseBytes- 00:08:18.999 [2024-10-09 00:15:49.578672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.578699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.578748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18445331179709071359 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.578764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.999 [2024-10-09 00:15:49.578822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.999 [2024-10-09 00:15:49.578839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.999 #9 NEW cov: 12479 ft: 13989 corp: 6/453b lim: 105 exec/s: 0 rss: 74Mb L: 78/103 MS: 1 CMP- DE: "\376\377\377\377"- 00:08:19.257 [2024-10-09 00:15:49.638875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.638904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.257 [2024-10-09 00:15:49.638940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18445331179709071359 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.638956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.257 [2024-10-09 00:15:49.639013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.639029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.257 #10 NEW cov: 12479 ft: 14164 corp: 7/531b lim: 105 exec/s: 0 rss: 74Mb L: 78/103 MS: 1 ChangeBit- 00:08:19.257 [2024-10-09 00:15:49.699171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.699200] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.257 [2024-10-09 00:15:49.699263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.699279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.257 [2024-10-09 00:15:49.699333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.699349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.257 [2024-10-09 00:15:49.699404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.699423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.257 #11 NEW cov: 12479 ft: 14209 corp: 8/626b lim: 105 exec/s: 0 rss: 74Mb L: 95/103 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:08:19.257 [2024-10-09 00:15:49.739141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.257 [2024-10-09 00:15:49.739168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.739213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18445331179709071359 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.739228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.739284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.739300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.258 #12 NEW cov: 12479 ft: 14256 corp: 9/704b lim: 105 exec/s: 0 rss: 74Mb L: 78/103 MS: 1 ChangeBit- 00:08:19.258 [2024-10-09 00:15:49.799428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.799456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.799511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.799527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.799582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.799597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.799653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.799670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.258 #13 NEW cov: 12479 ft: 14291 corp: 10/789b lim: 105 exec/s: 0 rss: 74Mb L: 85/103 MS: 1 CrossOver- 00:08:19.258 [2024-10-09 00:15:49.839407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.839433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.839470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18445331179709071359 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.839487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.258 [2024-10-09 00:15:49.839542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.258 [2024-10-09 00:15:49.839558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.258 #14 NEW cov: 12479 ft: 14325 corp: 11/867b lim: 105 exec/s: 0 rss: 74Mb L: 78/103 MS: 1 ShuffleBytes- 00:08:19.516 [2024-10-09 00:15:49.899736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.899764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.899806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.899827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.899885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.899902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.899955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.899971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.516 #15 NEW cov: 12479 ft: 14375 corp: 12/962b lim: 105 exec/s: 0 rss: 74Mb L: 95/103 MS: 1 ChangeBinInt- 00:08:19.516 
[2024-10-09 00:15:49.959898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.959927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.959975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.959991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.960046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.960062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:49.960118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:49.960151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.516 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:19.516 #16 NEW cov: 12502 ft: 14478 corp: 13/1057b lim: 105 exec/s: 0 rss: 74Mb L: 95/103 MS: 1 ChangeBinInt- 00:08:19.516 [2024-10-09 00:15:50.000014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.000043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.000091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.000107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.000162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.000179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.000238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043166569495290 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.000254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.516 #17 NEW cov: 12502 ft: 14500 corp: 14/1152b lim: 105 exec/s: 0 rss: 74Mb L: 95/103 MS: 1 ChangeBinInt- 00:08:19.516 [2024-10-09 00:15:50.040045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5186733876797505535 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:19.516 [2024-10-09 00:15:50.040078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.040115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18086174628458986234 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.040133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.040190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.040207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 #28 NEW cov: 12502 ft: 14524 corp: 15/1234b lim: 105 exec/s: 0 rss: 74Mb L: 82/103 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:08:19.516 [2024-10-09 00:15:50.080243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.080275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.080315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.080332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.080386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.080403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.080457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.080474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.516 #29 NEW cov: 12502 ft: 14598 corp: 16/1325b lim: 105 exec/s: 29 rss: 74Mb L: 91/103 MS: 1 ShuffleBytes- 00:08:19.516 [2024-10-09 00:15:50.120191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.120221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.120260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.120277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.516 [2024-10-09 00:15:50.120334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.516 [2024-10-09 00:15:50.120354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.516 #30 NEW cov: 12502 ft: 14612 corp: 17/1403b lim: 105 exec/s: 30 rss: 74Mb L: 78/103 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:08:19.774 [2024-10-09 00:15:50.160469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.160497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.160542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.160557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.160613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.160629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.160686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.160703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.774 #31 NEW cov: 12502 ft: 14634 corp: 18/1499b lim: 105 exec/s: 31 rss: 74Mb L: 96/103 MS: 1 InsertByte- 00:08:19.774 [2024-10-09 00:15:50.220645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.220673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.220728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.220744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.220801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616939039723 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.220822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.220880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.220896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.774 #32 NEW cov: 12502 ft: 14640 
corp: 19/1602b lim: 105 exec/s: 32 rss: 74Mb L: 103/103 MS: 1 ShuffleBytes- 00:08:19.774 [2024-10-09 00:15:50.280683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.280712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.280757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18445331179709071359 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.280774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.280832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043205878512378 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.280852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.774 #33 NEW cov: 12502 ft: 14662 corp: 20/1681b lim: 105 exec/s: 33 rss: 74Mb L: 79/103 MS: 1 InsertByte- 00:08:19.774 [2024-10-09 00:15:50.320942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.320969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.321024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.321041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.321097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.321114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.321172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.321188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.774 #34 NEW cov: 12502 ft: 14709 corp: 21/1772b lim: 105 exec/s: 34 rss: 74Mb L: 91/103 MS: 1 CopyPart- 00:08:19.774 [2024-10-09 00:15:50.361050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085042265918208762 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.361078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.361135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.361152] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.361207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.361224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.774 [2024-10-09 00:15:50.361280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.774 [2024-10-09 00:15:50.361297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.774 #35 NEW cov: 12502 ft: 14714 corp: 22/1868b lim: 105 exec/s: 35 rss: 74Mb L: 96/103 MS: 1 InsertByte- 00:08:20.047 [2024-10-09 00:15:50.421216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085042266052426490 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.421243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.421300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.421316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.421374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.421392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.421446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.421462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.048 #36 NEW cov: 12502 ft: 14729 corp: 23/1964b lim: 105 exec/s: 36 rss: 74Mb L: 96/103 MS: 1 ChangeBit- 00:08:20.048 [2024-10-09 00:15:50.481401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.481428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.481480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.481497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.481552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.481568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.481624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.481641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.048 #37 NEW cov: 12502 ft: 14742 corp: 24/2066b lim: 105 exec/s: 37 rss: 74Mb L: 102/103 MS: 1 ChangeBinInt- 00:08:20.048 [2024-10-09 00:15:50.521196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613493943275 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.048 [2024-10-09 00:15:50.521224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.048 [2024-10-09 00:15:50.521261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.521277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.049 #39 NEW cov: 12502 ft: 15133 corp: 25/2128b lim: 105 exec/s: 39 rss: 75Mb L: 62/103 MS: 2 ChangeByte-CrossOver- 00:08:20.049 [2024-10-09 00:15:50.561475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.561504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.049 [2024-10-09 00:15:50.561548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.561564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.049 [2024-10-09 00:15:50.561618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.561634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.049 #40 NEW cov: 12502 ft: 15160 corp: 26/2206b lim: 105 exec/s: 40 rss: 75Mb L: 78/103 MS: 1 ShuffleBytes- 00:08:20.049 [2024-10-09 00:15:50.621746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.621774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.049 [2024-10-09 00:15:50.621830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.621848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:08:20.049 [2024-10-09 00:15:50.621903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616939039723 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.621920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.049 [2024-10-09 00:15:50.621979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.049 [2024-10-09 00:15:50.621995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.049 #41 NEW cov: 12502 ft: 15179 corp: 27/2300b lim: 105 exec/s: 41 rss: 75Mb L: 94/103 MS: 1 EraseBytes- 00:08:20.312 [2024-10-09 00:15:50.681832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.681860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.681913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.681929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.681985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.682002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.312 #42 NEW cov: 12502 ft: 15187 corp: 28/2378b lim: 105 exec/s: 42 rss: 75Mb L: 78/103 MS: 1 ChangeByte- 00:08:20.312 [2024-10-09 00:15:50.742159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.742188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.742252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.742269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.742325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.742340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.742397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.742416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.312 #43 NEW cov: 12502 ft: 15193 corp: 29/2477b lim: 105 exec/s: 43 rss: 75Mb L: 99/103 MS: 1 InsertRepeatedBytes- 00:08:20.312 [2024-10-09 00:15:50.782218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.782247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.782297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043207472347898 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.782312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.782366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.782383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.782439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.782456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.312 #44 NEW cov: 12502 ft: 15232 corp: 30/2580b lim: 105 exec/s: 44 rss: 75Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:08:20.312 [2024-10-09 00:15:50.842405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16947867742481673195 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.312 [2024-10-09 00:15:50.842432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.312 [2024-10-09 00:15:50.842488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.842503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.842557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616939039723 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.842573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.842629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.842646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.313 #45 NEW cov: 12502 ft: 15253 corp: 31/2683b lim: 105 exec/s: 45 rss: 75Mb L: 103/103 MS: 1 ChangeByte- 00:08:20.313 [2024-10-09 00:15:50.882375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.882402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.882441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.882457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.882509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64255 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.882528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.942580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043206516046586 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.942607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.942649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18085043209519168250 len:64251 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.942665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.313 [2024-10-09 00:15:50.942719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18085043209519168250 len:64255 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.313 [2024-10-09 00:15:50.942736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.572 #47 NEW cov: 12502 ft: 15268 corp: 32/2762b lim: 105 exec/s: 47 rss: 75Mb L: 79/103 MS: 2 InsertByte-ChangeBinInt- 00:08:20.572 [2024-10-09 00:15:50.982791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613173144555 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:50.982822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.572 [2024-10-09 00:15:50.982896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:50.982913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.572 [2024-10-09 00:15:50.982969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16999940616948018155 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:50.982985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.572 [2024-10-09 00:15:50.983042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16999940616948018155 len:5141 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:50.983058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.572 #48 NEW cov: 12502 ft: 15302 corp: 33/2864b lim: 105 exec/s: 48 rss: 75Mb L: 102/103 MS: 1 ChangeBinInt- 00:08:20.572 [2024-10-09 00:15:51.042749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16999940613493943275 len:60396 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:51.042776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.572 [2024-10-09 00:15:51.042821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16999940616948018155 len:65280 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.572 [2024-10-09 00:15:51.042838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.572 #49 NEW cov: 12502 ft: 15326 corp: 34/2926b lim: 105 exec/s: 24 rss: 75Mb L: 62/103 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:08:20.572 #49 DONE cov: 12502 ft: 15326 corp: 34/2926b lim: 105 exec/s: 24 rss: 75Mb 00:08:20.572 ###### Recommended dictionary. ###### 00:08:20.572 "\376\377\377\377" # Uses: 5 00:08:20.572 ###### End of recommended dictionary. ###### 00:08:20.572 Done 49 runs in 2 second(s) 00:08:20.830 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:20.830 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:20.830 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:20.830 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:20.831 00:15:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:20.831 [2024-10-09 00:15:51.271867] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:20.831 [2024-10-09 00:15:51.271935] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891270 ] 00:08:21.089 [2024-10-09 00:15:51.480879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.089 [2024-10-09 00:15:51.555567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.089 [2024-10-09 00:15:51.614780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.089 [2024-10-09 00:15:51.631052] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:21.089 INFO: Running with entropic power schedule (0xFF, 100). 00:08:21.089 INFO: Seed: 1136311322 00:08:21.089 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:21.089 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:21.089 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:21.089 INFO: A corpus is not provided, starting from an empty corpus 00:08:21.090 #2 INITED exec/s: 0 rss: 66Mb 00:08:21.090 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:21.090 This may also happen if the target rejected all inputs we tried so far 00:08:21.090 [2024-10-09 00:15:51.686408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.090 [2024-10-09 00:15:51.686441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:21.619 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:21.619 #4 NEW cov: 12296 ft: 12289 corp: 2/27b lim: 120 exec/s: 0 rss: 73Mb L: 26/26 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:21.619 [2024-10-09 00:15:52.027295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.027330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.027387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.027403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.619 #5 NEW cov: 12409 ft: 13697 corp: 3/92b lim: 120 exec/s: 0 rss: 74Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:08:21.619 [2024-10-09 00:15:52.087721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.087750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.087785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.087802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.087873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.087889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.087944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.087961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.619 #10 NEW cov: 12415 ft: 14275 corp: 4/208b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 5 CopyPart-ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:08:21.619 [2024-10-09 00:15:52.127623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:0 lba:3255307777135881517 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.127649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.127699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.127716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.127769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.127783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.619 #13 NEW cov: 12500 ft: 14824 corp: 5/303b lim: 120 exec/s: 0 rss: 74Mb L: 95/116 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:08:21.619 [2024-10-09 00:15:52.167445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.167472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 #14 NEW cov: 12500 ft: 14937 corp: 6/329b lim: 120 exec/s: 0 rss: 74Mb L: 26/116 MS: 1 CopyPart- 00:08:21.619 [2024-10-09 00:15:52.207696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.207721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.207756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3327647950551526958 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.207772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.619 #20 NEW cov: 12500 ft: 15021 corp: 7/381b lim: 120 exec/s: 0 rss: 74Mb L: 52/116 MS: 1 CopyPart- 00:08:21.619 [2024-10-09 00:15:52.247985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3255307777135881517 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.248012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.248055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.248071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.619 [2024-10-09 00:15:52.248122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.619 [2024-10-09 00:15:52.248138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.877 #21 NEW cov: 12500 ft: 15119 corp: 8/476b lim: 120 exec/s: 0 rss: 74Mb L: 95/116 MS: 1 ChangeBinInt- 00:08:21.877 [2024-10-09 00:15:52.308285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.308312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.308361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.308377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.308429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.308443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.308496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.308511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.877 #22 NEW cov: 12500 ft: 15218 corp: 9/592b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ShuffleBytes- 00:08:21.877 [2024-10-09 00:15:52.368432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.368458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.368502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.368522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.368574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.368590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.877 [2024-10-09 00:15:52.368643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.877 [2024-10-09 00:15:52.368658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.877 #23 NEW cov: 12500 ft: 15356 corp: 10/708b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeByte- 00:08:21.878 [2024-10-09 00:15:52.408087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:2607 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:08:21.878 [2024-10-09 00:15:52.408114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.878 #24 NEW cov: 12500 ft: 15439 corp: 11/735b lim: 120 exec/s: 0 rss: 74Mb L: 27/116 MS: 1 CrossOver- 00:08:21.878 [2024-10-09 00:15:52.448249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:2607 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.878 [2024-10-09 00:15:52.448275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.878 #25 NEW cov: 12500 ft: 15480 corp: 12/763b lim: 120 exec/s: 0 rss: 74Mb L: 28/116 MS: 1 InsertByte- 00:08:21.878 [2024-10-09 00:15:52.508799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.878 [2024-10-09 00:15:52.508834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.878 [2024-10-09 00:15:52.508880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.878 [2024-10-09 00:15:52.508895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.878 [2024-10-09 00:15:52.508949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.878 [2024-10-09 00:15:52.508965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.136 #26 NEW cov: 12500 ft: 15551 corp: 13/843b lim: 120 exec/s: 0 rss: 74Mb L: 80/116 MS: 1 EraseBytes- 00:08:22.136 [2024-10-09 00:15:52.569050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3255307777135881517 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.569076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.569144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.569160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.569213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.569228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.569280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3255307777713450285 len:11310 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.569297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.136 
NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:22.136 #27 NEW cov: 12523 ft: 15593 corp: 14/939b lim: 120 exec/s: 0 rss: 74Mb L: 96/116 MS: 1 InsertByte- 00:08:22.136 [2024-10-09 00:15:52.629193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.629223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.629265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.629280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.629332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.629348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.629401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.629417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.136 #28 NEW cov: 12523 ft: 15625 corp: 15/1055b lim: 120 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeByte- 00:08:22.136 [2024-10-09 00:15:52.669016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.669044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.669086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.669102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.136 #29 NEW cov: 12523 ft: 15657 corp: 16/1121b lim: 120 exec/s: 29 rss: 74Mb L: 66/116 MS: 1 InsertByte- 00:08:22.136 [2024-10-09 00:15:52.729317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.729345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.729384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.729400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.136 [2024-10-09 00:15:52.729452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.136 [2024-10-09 00:15:52.729468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.394 #30 NEW cov: 12523 ft: 15685 corp: 17/1199b lim: 120 exec/s: 30 rss: 74Mb L: 78/116 MS: 1 EraseBytes- 00:08:22.394 [2024-10-09 00:15:52.789639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.394 [2024-10-09 00:15:52.789669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.394 [2024-10-09 00:15:52.789721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.394 [2024-10-09 00:15:52.789737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.394 [2024-10-09 00:15:52.789788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.394 [2024-10-09 00:15:52.789804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.394 [2024-10-09 00:15:52.789863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.394 [2024-10-09 00:15:52.789879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.394 #31 NEW cov: 12523 ft: 15705 corp: 18/1316b lim: 120 exec/s: 31 rss: 74Mb L: 117/117 MS: 1 InsertByte- 00:08:22.394 [2024-10-09 00:15:52.829757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.394 [2024-10-09 00:15:52.829785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.394 [2024-10-09 00:15:52.829835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.829851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.829920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.829937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.829989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.830004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.395 #32 NEW cov: 12523 ft: 15760 corp: 19/1434b lim: 
120 exec/s: 32 rss: 74Mb L: 118/118 MS: 1 InsertByte- 00:08:22.395 [2024-10-09 00:15:52.889766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.889794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.889840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.889856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.889908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:11823 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.889926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.395 #34 NEW cov: 12523 ft: 15786 corp: 20/1508b lim: 120 exec/s: 34 rss: 74Mb L: 74/118 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:22.395 [2024-10-09 00:15:52.949939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.949969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.950014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.950029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:52.950082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:52.950099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.395 #35 NEW cov: 12523 ft: 15796 corp: 21/1586b lim: 120 exec/s: 35 rss: 75Mb L: 78/118 MS: 1 CopyPart- 00:08:22.395 [2024-10-09 00:15:53.009954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:53.009981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.395 [2024-10-09 00:15:53.010031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.395 [2024-10-09 00:15:53.010048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.653 #36 NEW cov: 12523 ft: 15811 corp: 22/1652b lim: 120 exec/s: 36 rss: 75Mb L: 66/118 MS: 1 ChangeBit- 00:08:22.653 [2024-10-09 00:15:53.070002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:2607 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 
[2024-10-09 00:15:53.070030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.653 #37 NEW cov: 12523 ft: 15820 corp: 23/1679b lim: 120 exec/s: 37 rss: 75Mb L: 27/118 MS: 1 ChangeBinInt- 00:08:22.653 [2024-10-09 00:15:53.110353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3255307777135881517 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.110380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.110441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.110458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.110510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.110526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.653 #38 NEW cov: 12523 ft: 15838 corp: 24/1774b lim: 120 exec/s: 38 rss: 75Mb L: 95/118 MS: 1 ChangeBit- 00:08:22.653 [2024-10-09 00:15:53.150615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3255307777135881517 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.150642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.150690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2097882673209744669 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.150705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.150762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.150778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.150836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:3255307777713450285 len:11566 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.150869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.653 #39 NEW cov: 12523 ft: 15877 corp: 25/1882b lim: 120 exec/s: 39 rss: 75Mb L: 108/118 MS: 1 InsertRepeatedBytes- 00:08:22.653 [2024-10-09 00:15:53.210848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.210875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.210930] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.210944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.210995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.211011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.211064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.211080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.653 #40 NEW cov: 12523 ft: 15881 corp: 26/1998b lim: 120 exec/s: 40 rss: 75Mb L: 116/118 MS: 1 ChangeBit- 00:08:22.653 [2024-10-09 00:15:53.250781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.250808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.250857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.250873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.653 [2024-10-09 00:15:53.250926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.653 [2024-10-09 00:15:53.250942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.654 #41 NEW cov: 12523 ft: 15888 corp: 27/2075b lim: 120 exec/s: 41 rss: 75Mb L: 77/118 MS: 1 EraseBytes- 00:08:22.919 [2024-10-09 00:15:53.291128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.291156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.291206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.291223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.291281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.291297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:22.919 [2024-10-09 00:15:53.291349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:281470681743360 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.291366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.919 #42 NEW cov: 12523 ft: 15900 corp: 28/2191b lim: 120 exec/s: 42 rss: 75Mb L: 116/118 MS: 1 ChangeBinInt- 00:08:22.919 [2024-10-09 00:15:53.351204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.351231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.351278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.351293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.351345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.351358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.351412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.351427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.919 #43 NEW cov: 12523 ft: 15910 corp: 29/2307b lim: 120 exec/s: 43 rss: 75Mb L: 116/118 MS: 1 ShuffleBytes- 00:08:22.919 [2024-10-09 00:15:53.390997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551330350 len:12032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.391023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.919 [2024-10-09 00:15:53.391087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.391103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.919 #44 NEW cov: 12523 ft: 15921 corp: 30/2373b lim: 120 exec/s: 44 rss: 75Mb L: 66/118 MS: 1 InsertByte- 00:08:22.919 [2024-10-09 00:15:53.431431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551330350 len:12032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.919 [2024-10-09 00:15:53.431457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.431505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 
[2024-10-09 00:15:53.431520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.431571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50944 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.431587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.431645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.431661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.920 #45 NEW cov: 12523 ft: 15933 corp: 31/2475b lim: 120 exec/s: 45 rss: 75Mb L: 102/118 MS: 1 InsertRepeatedBytes- 00:08:22.920 [2024-10-09 00:15:53.491390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.491417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.491461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.491477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.491530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.491546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.920 #46 NEW cov: 12523 ft: 15950 corp: 32/2552b lim: 120 exec/s: 46 rss: 75Mb L: 77/118 MS: 1 ShuffleBytes- 00:08:22.920 [2024-10-09 00:15:53.551411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.551438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.920 [2024-10-09 00:15:53.551474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446603336221196287 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.920 [2024-10-09 00:15:53.551490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.180 #47 NEW cov: 12523 ft: 15974 corp: 33/2618b lim: 120 exec/s: 47 rss: 75Mb L: 66/118 MS: 1 ChangeBit- 00:08:23.180 [2024-10-09 00:15:53.611577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:3327647950551526958 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.180 [2024-10-09 00:15:53.611603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.180 [2024-10-09 00:15:53.611639] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12884901887 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.180 [2024-10-09 00:15:53.611655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.180 #48 NEW cov: 12523 ft: 15975 corp: 34/2684b lim: 120 exec/s: 48 rss: 75Mb L: 66/118 MS: 1 ChangeBinInt- 00:08:23.180 [2024-10-09 00:15:53.651856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.180 [2024-10-09 00:15:53.651882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.180 [2024-10-09 00:15:53.651945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.180 [2024-10-09 00:15:53.651961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.180 [2024-10-09 00:15:53.652016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.180 [2024-10-09 00:15:53.652035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.180 #49 NEW cov: 12523 ft: 15988 corp: 35/2764b lim: 120 exec/s: 24 rss: 75Mb L: 80/118 MS: 1 ChangeBinInt- 00:08:23.180 #49 DONE cov: 12523 ft: 15988 corp: 35/2764b lim: 120 exec/s: 24 rss: 75Mb 00:08:23.180 Done 49 runs in 2 second(s) 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:23.438 00:15:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:23.438 [2024-10-09 00:15:53.882214] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:23.438 [2024-10-09 00:15:53.882295] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891629 ] 00:08:23.696 [2024-10-09 00:15:54.086016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.696 [2024-10-09 00:15:54.160544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.696 [2024-10-09 00:15:54.219825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.696 [2024-10-09 00:15:54.236072] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:23.696 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.696 INFO: Seed: 3742314908 00:08:23.696 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:23.696 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:23.696 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:23.696 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.696 #2 INITED exec/s: 0 rss: 66Mb 00:08:23.696 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:23.696 This may also happen if the target rejected all inputs we tried so far 00:08:23.696 [2024-10-09 00:15:54.281458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.696 [2024-10-09 00:15:54.281489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.696 [2024-10-09 00:15:54.281542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.696 [2024-10-09 00:15:54.281556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.261 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:24.261 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:24.261 #12 NEW cov: 12239 ft: 12238 corp: 2/51b lim: 100 exec/s: 0 rss: 73Mb L: 50/50 MS: 5 ChangeBit-ChangeByte-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:24.261 [2024-10-09 00:15:54.622649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.261 [2024-10-09 00:15:54.622690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.261 [2024-10-09 00:15:54.622741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.261 [2024-10-09 00:15:54.622756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.261 [2024-10-09 00:15:54.622806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.261 [2024-10-09 00:15:54.622825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.261 [2024-10-09 00:15:54.622893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:24.261 [2024-10-09 00:15:54.622907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.261 #18 NEW cov: 12352 ft: 13263 corp: 3/133b lim: 100 exec/s: 0 rss: 74Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:08:24.261 [2024-10-09 00:15:54.682534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.261 [2024-10-09 00:15:54.682562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.261 [2024-10-09 00:15:54.682611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.261 [2024-10-09 00:15:54.682626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.262 #19 NEW cov: 12358 ft: 13488 corp: 4/183b lim: 100 exec/s: 0 rss: 74Mb L: 50/82 MS: 1 ChangeBinInt- 00:08:24.262 [2024-10-09 00:15:54.722674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.262 [2024-10-09 00:15:54.722701] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.262 [2024-10-09 00:15:54.722739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.262 [2024-10-09 00:15:54.722754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.262 [2024-10-09 00:15:54.722806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.262 [2024-10-09 00:15:54.722828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.262 #26 NEW cov: 12443 ft: 13967 corp: 5/256b lim: 100 exec/s: 0 rss: 74Mb L: 73/82 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:24.262 [2024-10-09 00:15:54.762728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.262 [2024-10-09 00:15:54.762760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.262 [2024-10-09 00:15:54.762820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.262 [2024-10-09 00:15:54.762835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.262 #27 NEW cov: 12443 ft: 14092 corp: 6/306b lim: 100 exec/s: 0 rss: 74Mb L: 50/82 MS: 1 ChangeBit- 00:08:24.262 [2024-10-09 00:15:54.802700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.262 [2024-10-09 00:15:54.802726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.262 #28 NEW cov: 12443 ft: 14472 corp: 7/331b lim: 100 exec/s: 0 rss: 74Mb L: 25/82 MS: 1 EraseBytes- 00:08:24.262 [2024-10-09 00:15:54.842914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.262 [2024-10-09 00:15:54.842940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.262 [2024-10-09 00:15:54.842984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.262 [2024-10-09 00:15:54.842999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.262 #29 NEW cov: 12443 ft: 14528 corp: 8/389b lim: 100 exec/s: 0 rss: 74Mb L: 58/82 MS: 1 CMP- DE: "\252O\247D\247\030'\000"- 00:08:24.262 [2024-10-09 00:15:54.883037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.262 [2024-10-09 00:15:54.883063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.262 [2024-10-09 00:15:54.883127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.262 [2024-10-09 00:15:54.883143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 #30 NEW cov: 12443 ft: 14610 corp: 9/434b lim: 100 exec/s: 0 rss: 74Mb L: 45/82 MS: 1 InsertRepeatedBytes- 00:08:24.519 [2024-10-09 
00:15:54.923134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.519 [2024-10-09 00:15:54.923159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.519 [2024-10-09 00:15:54.923203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.519 [2024-10-09 00:15:54.923217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 #31 NEW cov: 12443 ft: 14648 corp: 10/492b lim: 100 exec/s: 0 rss: 74Mb L: 58/82 MS: 1 CMP- DE: "\001\002\000\000"- 00:08:24.519 [2024-10-09 00:15:54.983367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.519 [2024-10-09 00:15:54.983395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.519 [2024-10-09 00:15:54.983430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.519 [2024-10-09 00:15:54.983445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.519 #32 NEW cov: 12443 ft: 14767 corp: 11/550b lim: 100 exec/s: 0 rss: 74Mb L: 58/82 MS: 1 CrossOver- 00:08:24.520 [2024-10-09 00:15:55.043405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.520 [2024-10-09 00:15:55.043431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.520 #33 NEW cov: 12443 ft: 14811 corp: 12/575b lim: 100 exec/s: 0 rss: 74Mb L: 25/82 MS: 1 ChangeBit- 00:08:24.520 [2024-10-09 00:15:55.103747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.520 [2024-10-09 00:15:55.103772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.520 [2024-10-09 00:15:55.103818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.520 [2024-10-09 00:15:55.103833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.520 [2024-10-09 00:15:55.103901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.520 [2024-10-09 00:15:55.103916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.520 #34 NEW cov: 12443 ft: 14826 corp: 13/648b lim: 100 exec/s: 0 rss: 74Mb L: 73/82 MS: 1 CopyPart- 00:08:24.783 [2024-10-09 00:15:55.163758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.163784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:24.783 #35 NEW cov: 12466 ft: 14942 corp: 14/682b lim: 100 exec/s: 0 rss: 74Mb L: 34/82 MS: 1 EraseBytes- 00:08:24.783 [2024-10-09 00:15:55.203935] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.203960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.203996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.783 [2024-10-09 00:15:55.204010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.783 #36 NEW cov: 12466 ft: 14963 corp: 15/740b lim: 100 exec/s: 0 rss: 74Mb L: 58/82 MS: 1 ChangeByte- 00:08:24.783 [2024-10-09 00:15:55.244287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.244312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.244365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.783 [2024-10-09 00:15:55.244378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.244426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.783 [2024-10-09 00:15:55.244440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.244492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:24.783 [2024-10-09 00:15:55.244506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.783 #42 NEW cov: 12466 ft: 15008 corp: 16/823b lim: 100 exec/s: 42 rss: 74Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:24.783 [2024-10-09 00:15:55.304328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.304353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.304397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.783 [2024-10-09 00:15:55.304412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.304466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.783 [2024-10-09 00:15:55.304481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.783 #43 NEW cov: 12466 ft: 15018 corp: 17/896b lim: 100 exec/s: 43 rss: 74Mb L: 73/83 MS: 1 ChangeBit- 00:08:24.783 [2024-10-09 00:15:55.364662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.364689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.364726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.783 [2024-10-09 00:15:55.364741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.364792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.783 [2024-10-09 00:15:55.364807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.364867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:24.783 [2024-10-09 00:15:55.364883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.783 #44 NEW cov: 12466 ft: 15037 corp: 18/986b lim: 100 exec/s: 44 rss: 74Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:08:24.783 [2024-10-09 00:15:55.404643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:24.783 [2024-10-09 00:15:55.404669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.404703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:24.783 [2024-10-09 00:15:55.404717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.783 [2024-10-09 00:15:55.404768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:24.783 [2024-10-09 00:15:55.404782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.044 #45 NEW cov: 12466 ft: 15055 corp: 19/1059b lim: 100 exec/s: 45 rss: 74Mb L: 73/90 MS: 1 ChangeBit- 00:08:25.044 [2024-10-09 00:15:55.444869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.044 [2024-10-09 00:15:55.444895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.444956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.044 [2024-10-09 00:15:55.444971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.445032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.044 [2024-10-09 00:15:55.445045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.445097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.044 [2024-10-09 00:15:55.445127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.044 #46 NEW cov: 12466 ft: 15066 corp: 20/1142b lim: 100 exec/s: 46 rss: 74Mb L: 83/90 MS: 1 ChangeByte- 00:08:25.044 [2024-10-09 00:15:55.505029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.044 [2024-10-09 
00:15:55.505055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.505100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.044 [2024-10-09 00:15:55.505116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.505166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.044 [2024-10-09 00:15:55.505180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.044 [2024-10-09 00:15:55.505233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.044 [2024-10-09 00:15:55.505247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.044 #47 NEW cov: 12466 ft: 15111 corp: 21/1224b lim: 100 exec/s: 47 rss: 75Mb L: 82/90 MS: 1 ChangeBit- 00:08:25.045 [2024-10-09 00:15:55.565092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.045 [2024-10-09 00:15:55.565120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.565161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.045 [2024-10-09 00:15:55.565176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.565228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.045 [2024-10-09 00:15:55.565243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.045 #48 NEW cov: 12466 ft: 15175 corp: 22/1301b lim: 100 exec/s: 48 rss: 75Mb L: 77/90 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:08:25.045 [2024-10-09 00:15:55.605337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.045 [2024-10-09 00:15:55.605365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.605410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.045 [2024-10-09 00:15:55.605425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.605475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.045 [2024-10-09 00:15:55.605490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.605541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.045 [2024-10-09 00:15:55.605555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:08:25.045 #49 NEW cov: 12466 ft: 15187 corp: 23/1388b lim: 100 exec/s: 49 rss: 75Mb L: 87/90 MS: 1 CopyPart- 00:08:25.045 [2024-10-09 00:15:55.665503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.045 [2024-10-09 00:15:55.665528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.665573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.045 [2024-10-09 00:15:55.665587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.665638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.045 [2024-10-09 00:15:55.665654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.045 [2024-10-09 00:15:55.665712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.045 [2024-10-09 00:15:55.665726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.303 #50 NEW cov: 12466 ft: 15222 corp: 24/1473b lim: 100 exec/s: 50 rss: 75Mb L: 85/90 MS: 1 CopyPart- 00:08:25.303 [2024-10-09 00:15:55.725328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.303 [2024-10-09 00:15:55.725353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.303 #54 NEW cov: 12466 ft: 15227 corp: 25/1500b lim: 100 exec/s: 54 rss: 75Mb L: 27/90 MS: 4 CopyPart-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:08:25.303 [2024-10-09 00:15:55.765673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.303 [2024-10-09 00:15:55.765699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.765745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.303 [2024-10-09 00:15:55.765760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.765816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.303 [2024-10-09 00:15:55.765832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.303 #56 NEW cov: 12466 ft: 15233 corp: 26/1563b lim: 100 exec/s: 56 rss: 75Mb L: 63/90 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:25.303 [2024-10-09 00:15:55.805802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.303 [2024-10-09 00:15:55.805836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.805879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.303 [2024-10-09 00:15:55.805893] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.805946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.303 [2024-10-09 00:15:55.805960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.303 #57 NEW cov: 12466 ft: 15256 corp: 27/1636b lim: 100 exec/s: 57 rss: 75Mb L: 73/90 MS: 1 ShuffleBytes- 00:08:25.303 [2024-10-09 00:15:55.846024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.303 [2024-10-09 00:15:55.846050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.846095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.303 [2024-10-09 00:15:55.846111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.846163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.303 [2024-10-09 00:15:55.846178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.846231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.303 [2024-10-09 00:15:55.846245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.303 #58 NEW cov: 12466 ft: 15277 corp: 28/1726b lim: 100 exec/s: 58 rss: 75Mb L: 90/90 MS: 1 ChangeBit- 00:08:25.303 [2024-10-09 00:15:55.905919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.303 [2024-10-09 00:15:55.905945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.303 [2024-10-09 00:15:55.905991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.304 [2024-10-09 00:15:55.906005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.304 #59 NEW cov: 12466 ft: 15345 corp: 29/1784b lim: 100 exec/s: 59 rss: 75Mb L: 58/90 MS: 1 ChangeBit- 00:08:25.562 [2024-10-09 00:15:55.946141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.562 [2024-10-09 00:15:55.946166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:55.946206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.562 [2024-10-09 00:15:55.946221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:55.946271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.562 [2024-10-09 00:15:55.946285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.562 #60 NEW cov: 12466 ft: 15367 corp: 30/1847b lim: 100 exec/s: 60 rss: 75Mb L: 63/90 MS: 1 ChangeBit- 00:08:25.562 [2024-10-09 00:15:56.006067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.562 [2024-10-09 00:15:56.006092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.562 #61 NEW cov: 12466 ft: 15397 corp: 31/1874b lim: 100 exec/s: 61 rss: 75Mb L: 27/90 MS: 1 ChangeByte- 00:08:25.562 [2024-10-09 00:15:56.066466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.562 [2024-10-09 00:15:56.066492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.066535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.562 [2024-10-09 00:15:56.066550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.066602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.562 [2024-10-09 00:15:56.066617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.562 #62 NEW cov: 12466 ft: 15399 corp: 32/1940b lim: 100 exec/s: 62 rss: 75Mb L: 66/90 MS: 1 CopyPart- 00:08:25.562 [2024-10-09 00:15:56.126514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.562 [2024-10-09 00:15:56.126540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.126584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.562 [2024-10-09 00:15:56.126597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.562 #63 NEW cov: 12466 ft: 15415 corp: 33/1989b lim: 100 exec/s: 63 rss: 75Mb L: 49/90 MS: 1 InsertRepeatedBytes- 00:08:25.562 [2024-10-09 00:15:56.186916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.562 [2024-10-09 00:15:56.186941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.186993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.562 [2024-10-09 00:15:56.187010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.187061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.562 [2024-10-09 00:15:56.187075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.562 [2024-10-09 00:15:56.187128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:25.562 [2024-10-09 00:15:56.187143] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.824 #64 NEW cov: 12466 ft: 15422 corp: 34/2072b lim: 100 exec/s: 64 rss: 75Mb L: 83/90 MS: 1 InsertByte- 00:08:25.824 [2024-10-09 00:15:56.246995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:25.824 [2024-10-09 00:15:56.247021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.824 [2024-10-09 00:15:56.247057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:25.824 [2024-10-09 00:15:56.247072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.824 [2024-10-09 00:15:56.247125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:25.824 [2024-10-09 00:15:56.247139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.824 #65 NEW cov: 12466 ft: 15459 corp: 35/2150b lim: 100 exec/s: 32 rss: 75Mb L: 78/90 MS: 1 InsertByte- 00:08:25.824 #65 DONE cov: 12466 ft: 15459 corp: 35/2150b lim: 100 exec/s: 32 rss: 75Mb 00:08:25.824 ###### Recommended dictionary. ###### 00:08:25.824 "\252O\247D\247\030'\000" # Uses: 0 00:08:25.824 "\001\002\000\000" # Uses: 1 00:08:25.824 ###### End of recommended dictionary. ###### 00:08:25.824 Done 65 runs in 2 second(s) 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:25.824 00:15:56 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:25.824 00:15:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:26.133 [2024-10-09 00:15:56.476659] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:26.133 [2024-10-09 00:15:56.476725] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891985 ] 00:08:26.133 [2024-10-09 00:15:56.680693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.435 [2024-10-09 00:15:56.755492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.435 [2024-10-09 00:15:56.815130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.435 [2024-10-09 00:15:56.831365] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:26.435 INFO: Running with entropic power schedule (0xFF, 100). 00:08:26.435 INFO: Seed: 2042376352 00:08:26.435 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:26.435 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:26.435 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:26.435 INFO: A corpus is not provided, starting from an empty corpus 00:08:26.435 #2 INITED exec/s: 0 rss: 66Mb 00:08:26.435 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:26.435 This may also happen if the target rejected all inputs we tried so far 00:08:26.435 [2024-10-09 00:15:56.886674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983925855958 len:54999 00:08:26.435 [2024-10-09 00:15:56.886708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.435 [2024-10-09 00:15:56.886761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:26.435 [2024-10-09 00:15:56.886777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.720 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:26.720 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:26.720 #23 NEW cov: 12217 ft: 12216 corp: 2/21b lim: 50 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:26.720 [2024-10-09 00:15:57.217617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983912486614 len:54999 00:08:26.720 [2024-10-09 00:15:57.217655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.720 [2024-10-09 00:15:57.217709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:26.721 [2024-10-09 00:15:57.217725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.721 #26 NEW cov: 12330 ft: 12833 corp: 3/42b lim: 50 exec/s: 0 rss: 74Mb L: 21/21 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:08:26.721 [2024-10-09 00:15:57.257606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983912486614 len:54999 00:08:26.721 [2024-10-09 00:15:57.257634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.721 [2024-10-09 00:15:57.257683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:9175 00:08:26.721 [2024-10-09 00:15:57.257699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.721 #27 NEW cov: 12336 ft: 12949 corp: 4/63b lim: 50 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ChangeByte- 00:08:26.721 [2024-10-09 00:15:57.317786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:26.721 [2024-10-09 00:15:57.317818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.721 [2024-10-09 00:15:57.317875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:26.721 [2024-10-09 00:15:57.317892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.721 #33 NEW cov: 12421 ft: 13275 corp: 5/84b lim: 50 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ShuffleBytes- 00:08:26.979 [2024-10-09 00:15:57.358032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:26.979 [2024-10-09 00:15:57.358060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.358102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480572686976341718 len:54999 00:08:26.979 [2024-10-09 00:15:57.358118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.358169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15480796987348408022 len:54999 00:08:26.979 [2024-10-09 00:15:57.358185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.979 #34 NEW cov: 12421 ft: 13721 corp: 6/115b lim: 50 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 CopyPart- 00:08:26.979 [2024-10-09 00:15:57.418160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:26.979 [2024-10-09 00:15:57.418186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.418222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480572686976341718 len:54829 00:08:26.979 [2024-10-09 00:15:57.418238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.418289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15480796987348408022 len:54999 00:08:26.979 [2024-10-09 00:15:57.418306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.979 #35 NEW cov: 12421 ft: 13773 corp: 7/147b lim: 50 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertByte- 00:08:26.979 [2024-10-09 00:15:57.478208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:352321536 len:54999 00:08:26.979 [2024-10-09 00:15:57.478234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.478286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:26.979 [2024-10-09 00:15:57.478318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.979 #36 NEW cov: 12421 ft: 13844 corp: 8/168b lim: 50 exec/s: 0 rss: 74Mb L: 21/32 MS: 1 ChangeBinInt- 00:08:26.979 [2024-10-09 00:15:57.518329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983920364246 len:54999 00:08:26.979 [2024-10-09 00:15:57.518356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.518411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54820 00:08:26.979 [2024-10-09 00:15:57.518427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.979 #37 NEW cov: 12421 ft: 13913 corp: 9/190b lim: 50 exec/s: 0 rss: 74Mb L: 22/32 MS: 1 InsertByte- 00:08:26.979 [2024-10-09 00:15:57.578478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:26.979 [2024-10-09 00:15:57.578505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.979 [2024-10-09 00:15:57.578555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480572686976341718 len:54999 00:08:26.979 [2024-10-09 00:15:57.578571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.979 #38 NEW cov: 12421 ft: 14005 corp: 10/217b lim: 50 exec/s: 0 rss: 74Mb L: 27/32 MS: 1 EraseBytes- 00:08:27.238 [2024-10-09 00:15:57.618636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:27.238 [2024-10-09 00:15:57.618664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.238 [2024-10-09 00:15:57.618711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54820 00:08:27.238 [2024-10-09 00:15:57.618728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.238 #39 NEW cov: 12421 ft: 14105 corp: 11/239b lim: 50 exec/s: 0 rss: 74Mb L: 22/32 MS: 1 ChangeBinInt- 00:08:27.238 [2024-10-09 00:15:57.678791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10711 00:08:27.238 [2024-10-09 00:15:57.678822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.238 [2024-10-09 00:15:57.678870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987337070294 len:54820 00:08:27.238 [2024-10-09 00:15:57.678887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.238 #40 NEW cov: 12421 ft: 14120 corp: 12/261b lim: 50 exec/s: 0 rss: 74Mb L: 22/32 MS: 1 ShuffleBytes- 00:08:27.238 [2024-10-09 00:15:57.738932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:352321536 len:54999 00:08:27.238 [2024-10-09 00:15:57.738958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.238 [2024-10-09 00:15:57.739009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:922727405056 len:54999 00:08:27.238 [2024-10-09 00:15:57.739026] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.238 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:27.238 #41 NEW cov: 12444 ft: 14166 corp: 13/282b lim: 50 exec/s: 0 rss: 74Mb L: 21/32 MS: 1 CopyPart- 00:08:27.238 [2024-10-09 00:15:57.799119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:27.238 [2024-10-09 00:15:57.799146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.238 [2024-10-09 00:15:57.799181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15423151791727367894 len:54999 00:08:27.238 [2024-10-09 00:15:57.799200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.238 #42 NEW cov: 12444 ft: 14240 corp: 14/309b lim: 50 exec/s: 0 rss: 74Mb L: 27/32 MS: 1 CrossOver- 00:08:27.238 [2024-10-09 00:15:57.859271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:781047800188557014 len:54999 00:08:27.238 [2024-10-09 00:15:57.859299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.238 [2024-10-09 00:15:57.859348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:27.238 [2024-10-09 00:15:57.859365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.496 #43 NEW cov: 12444 ft: 14268 corp: 15/330b lim: 50 exec/s: 43 rss: 74Mb L: 21/32 MS: 1 ShuffleBytes- 00:08:27.496 [2024-10-09 00:15:57.899647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069766971391 len:65536 00:08:27.496 [2024-10-09 00:15:57.899675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:57.899715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:27.496 [2024-10-09 00:15:57.899732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:57.899782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:27.496 [2024-10-09 00:15:57.899798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:57.899849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15420325127720982230 len:215 00:08:27.496 [2024-10-09 00:15:57.899865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.496 #44 NEW cov: 12444 ft: 14544 corp: 16/373b lim: 50 exec/s: 44 rss: 74Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:08:27.496 [2024-10-09 00:15:57.959584] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:27.496 [2024-10-09 00:15:57.959611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:57.959647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987365185238 len:54820 00:08:27.496 [2024-10-09 00:15:57.959663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.496 #45 NEW cov: 12444 ft: 14555 corp: 17/395b lim: 50 exec/s: 45 rss: 74Mb L: 22/43 MS: 1 ChangeBit- 00:08:27.496 [2024-10-09 00:15:57.999717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:27.496 [2024-10-09 00:15:57.999744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:57.999780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987365185230 len:54820 00:08:27.496 [2024-10-09 00:15:57.999796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.496 #46 NEW cov: 12444 ft: 14585 corp: 18/417b lim: 50 exec/s: 46 rss: 75Mb L: 22/43 MS: 1 ChangeBinInt- 00:08:27.496 [2024-10-09 00:15:58.059873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:27.496 [2024-10-09 00:15:58.059908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:58.059963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796248630810326 len:10717 00:08:27.496 [2024-10-09 00:15:58.059980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.496 #47 NEW cov: 12444 ft: 14599 corp: 19/439b lim: 50 exec/s: 47 rss: 75Mb L: 22/43 MS: 1 ChangeBinInt- 00:08:27.496 [2024-10-09 00:15:58.100001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:27.496 [2024-10-09 00:15:58.100030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.496 [2024-10-09 00:15:58.100065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796248630810326 len:10717 00:08:27.496 [2024-10-09 00:15:58.100088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.755 #48 NEW cov: 12444 ft: 14609 corp: 20/461b lim: 50 exec/s: 48 rss: 75Mb L: 22/43 MS: 1 ChangeBit- 00:08:27.755 [2024-10-09 00:15:58.160185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983925855787 len:54999 00:08:27.755 [2024-10-09 00:15:58.160214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:27.755 [2024-10-09 00:15:58.160264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:27.755 [2024-10-09 00:15:58.160279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.755 #49 NEW cov: 12444 ft: 14665 corp: 21/481b lim: 50 exec/s: 49 rss: 75Mb L: 20/43 MS: 1 ChangeByte- 00:08:27.755 [2024-10-09 00:15:58.220360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983925855958 len:54999 00:08:27.755 [2024-10-09 00:15:58.220388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.755 [2024-10-09 00:15:58.220439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796983925855958 len:54999 00:08:27.755 [2024-10-09 00:15:58.220456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.755 #50 NEW cov: 12444 ft: 14677 corp: 22/502b lim: 50 exec/s: 50 rss: 75Mb L: 21/43 MS: 1 EraseBytes- 00:08:27.755 [2024-10-09 00:15:58.260733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069766971391 len:65536 00:08:27.755 [2024-10-09 00:15:58.260762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.755 [2024-10-09 00:15:58.260798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:27.755 [2024-10-09 00:15:58.260818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.755 [2024-10-09 00:15:58.260870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:27.755 [2024-10-09 00:15:58.260886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.755 [2024-10-09 00:15:58.260938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15480796983744004096 len:54785 00:08:27.755 [2024-10-09 00:15:58.260953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.755 #51 NEW cov: 12444 ft: 14691 corp: 23/549b lim: 50 exec/s: 51 rss: 75Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:08:27.755 [2024-10-09 00:15:58.320640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:27.755 [2024-10-09 00:15:58.320669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.755 [2024-10-09 00:15:58.320711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4744215475697145558 len:54999 00:08:27.755 [2024-10-09 00:15:58.320726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.755 #52 NEW cov: 12444 ft: 14693 corp: 24/571b lim: 50 
exec/s: 52 rss: 75Mb L: 22/47 MS: 1 InsertByte- 00:08:27.755 [2024-10-09 00:15:58.360662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480796983920364246 len:54999 00:08:27.755 [2024-10-09 00:15:58.360689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.755 #53 NEW cov: 12444 ft: 15036 corp: 25/585b lim: 50 exec/s: 53 rss: 75Mb L: 14/47 MS: 1 EraseBytes- 00:08:28.014 [2024-10-09 00:15:58.401020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12184531672835373355 len:2692 00:08:28.014 [2024-10-09 00:15:58.401047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.401090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3470350987147269846 len:54826 00:08:28.014 [2024-10-09 00:15:58.401107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.401160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15480796218549262038 len:54999 00:08:28.014 [2024-10-09 00:15:58.401176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.014 #54 NEW cov: 12444 ft: 15063 corp: 26/615b lim: 50 exec/s: 54 rss: 75Mb L: 30/47 MS: 1 CMP- DE: "\001\3319+\251\030'\000"- 00:08:28.014 [2024-10-09 00:15:58.461027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:28.014 [2024-10-09 00:15:58.461054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.461089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15652589988373517825 len:6184 00:08:28.014 [2024-10-09 00:15:58.461105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.014 #55 NEW cov: 12444 ft: 15092 corp: 27/637b lim: 50 exec/s: 55 rss: 75Mb L: 22/47 MS: 1 PersAutoDict- DE: "\001\3319+\251\030'\000"- 00:08:28.014 [2024-10-09 00:15:58.521427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069766969343 len:65536 00:08:28.014 [2024-10-09 00:15:58.521454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.521519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:28.014 [2024-10-09 00:15:58.521536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.521584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967295 len:1 00:08:28.014 [2024-10-09 00:15:58.521600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.521657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15420325127720982230 len:215 00:08:28.014 [2024-10-09 00:15:58.521673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.014 #56 NEW cov: 12444 ft: 15166 corp: 28/680b lim: 50 exec/s: 56 rss: 75Mb L: 43/47 MS: 1 ChangeBit- 00:08:28.014 [2024-10-09 00:15:58.561440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3146072370645686585 len:132 00:08:28.014 [2024-10-09 00:15:58.561467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.561502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3470350244117927638 len:55255 00:08:28.014 [2024-10-09 00:15:58.561518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.014 [2024-10-09 00:15:58.561567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15432193101612373718 len:11223 00:08:28.015 [2024-10-09 00:15:58.561582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.015 [2024-10-09 00:15:58.621580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3146072370645686585 len:132 00:08:28.015 [2024-10-09 00:15:58.621606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.015 [2024-10-09 00:15:58.621653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3470350244117927638 len:54231 00:08:28.015 [2024-10-09 00:15:58.621669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.015 [2024-10-09 00:15:58.621720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15432193101612373718 len:11223 00:08:28.015 [2024-10-09 00:15:58.621738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.015 #58 NEW cov: 12444 ft: 15203 corp: 29/710b lim: 50 exec/s: 58 rss: 75Mb L: 30/47 MS: 2 PersAutoDict-ChangeBit- DE: "\001\3319+\251\030'\000"- 00:08:28.273 [2024-10-09 00:15:58.661560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480613721960811222 len:10538 00:08:28.273 [2024-10-09 00:15:58.661586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.273 [2024-10-09 00:15:58.661634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15636497909851543041 len:1 00:08:28.273 [2024-10-09 00:15:58.661650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.273 #59 NEW cov: 12444 ft: 15220 corp: 30/738b lim: 50 exec/s: 59 rss: 75Mb L: 28/47 MS: 1 
InsertRepeatedBytes- 00:08:28.273 [2024-10-09 00:15:58.721715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15480600171338992342 len:54999 00:08:28.273 [2024-10-09 00:15:58.721741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.273 [2024-10-09 00:15:58.721775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15552854578589935913 len:54999 00:08:28.273 [2024-10-09 00:15:58.721791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.273 #60 NEW cov: 12444 ft: 15273 corp: 31/764b lim: 50 exec/s: 60 rss: 75Mb L: 26/47 MS: 1 CopyPart- 00:08:28.273 [2024-10-09 00:15:58.761868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:28.273 [2024-10-09 00:15:58.761895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.273 [2024-10-09 00:15:58.761932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4744003450341611222 len:54999 00:08:28.273 [2024-10-09 00:15:58.761946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.273 #61 NEW cov: 12444 ft: 15277 corp: 32/786b lim: 50 exec/s: 61 rss: 75Mb L: 22/47 MS: 1 ChangeBinInt- 00:08:28.273 [2024-10-09 00:15:58.822021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15492337457965501142 len:54999 00:08:28.273 [2024-10-09 00:15:58.822048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.273 [2024-10-09 00:15:58.822085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:28.273 [2024-10-09 00:15:58.822100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.273 #62 NEW cov: 12444 ft: 15283 corp: 33/809b lim: 50 exec/s: 62 rss: 75Mb L: 23/47 MS: 1 InsertByte- 00:08:28.273 [2024-10-09 00:15:58.862092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15423376088676882134 len:54999 00:08:28.274 [2024-10-09 00:15:58.862119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.274 [2024-10-09 00:15:58.862152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15480796987348408022 len:54999 00:08:28.274 [2024-10-09 00:15:58.862168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.274 #63 NEW cov: 12444 ft: 15295 corp: 34/830b lim: 50 exec/s: 31 rss: 75Mb L: 21/47 MS: 1 ShuffleBytes- 00:08:28.274 #63 DONE cov: 12444 ft: 15295 corp: 34/830b lim: 50 exec/s: 31 rss: 75Mb 00:08:28.274 ###### Recommended dictionary. ###### 00:08:28.274 "\001\3319+\251\030'\000" # Uses: 2 00:08:28.274 ###### End of recommended dictionary. 
###### 00:08:28.274 Done 63 runs in 2 second(s) 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:28.532 00:15:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:28.532 [2024-10-09 00:15:59.090262] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:08:28.532 [2024-10-09 00:15:59.090329] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892347 ] 00:08:28.804 [2024-10-09 00:15:59.288071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.804 [2024-10-09 00:15:59.362251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.804 [2024-10-09 00:15:59.421600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.063 [2024-10-09 00:15:59.437871] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:29.063 INFO: Running with entropic power schedule (0xFF, 100). 00:08:29.063 INFO: Seed: 351405450 00:08:29.063 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:29.063 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:29.063 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:29.063 INFO: A corpus is not provided, starting from an empty corpus 00:08:29.063 #2 INITED exec/s: 0 rss: 66Mb 00:08:29.063 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:29.063 This may also happen if the target rejected all inputs we tried so far 00:08:29.063 [2024-10-09 00:15:59.508066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.063 [2024-10-09 00:15:59.508106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.063 [2024-10-09 00:15:59.508214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.063 [2024-10-09 00:15:59.508235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.321 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:29.321 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:29.321 #17 NEW cov: 12275 ft: 12276 corp: 2/53b lim: 90 exec/s: 0 rss: 74Mb L: 52/52 MS: 5 InsertByte-CopyPart-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:08:29.321 [2024-10-09 00:15:59.848685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.321 [2024-10-09 00:15:59.848727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.321 #20 NEW cov: 12388 ft: 13518 corp: 3/71b lim: 90 exec/s: 0 rss: 74Mb L: 18/52 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:08:29.321 [2024-10-09 00:15:59.899128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.321 [2024-10-09 00:15:59.899158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.321 [2024-10-09 00:15:59.899222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.321 [2024-10-09 
00:15:59.899242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.321 #21 NEW cov: 12394 ft: 13822 corp: 4/124b lim: 90 exec/s: 0 rss: 74Mb L: 53/53 MS: 1 InsertByte- 00:08:29.580 [2024-10-09 00:15:59.969777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.580 [2024-10-09 00:15:59.969808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:15:59.969905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.580 [2024-10-09 00:15:59.969925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:15:59.970014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.580 [2024-10-09 00:15:59.970036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.580 #25 NEW cov: 12479 ft: 14406 corp: 5/190b lim: 90 exec/s: 0 rss: 74Mb L: 66/66 MS: 4 ChangeByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:29.580 [2024-10-09 00:16:00.019921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.580 [2024-10-09 00:16:00.019951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.020024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.580 [2024-10-09 00:16:00.020042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.020122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.580 [2024-10-09 00:16:00.020139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.580 #31 NEW cov: 12479 ft: 14486 corp: 6/250b lim: 90 exec/s: 0 rss: 74Mb L: 60/66 MS: 1 InsertRepeatedBytes- 00:08:29.580 [2024-10-09 00:16:00.090058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.580 [2024-10-09 00:16:00.090089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.090143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.580 [2024-10-09 00:16:00.090161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.580 #32 NEW cov: 12479 ft: 14567 corp: 7/302b lim: 90 exec/s: 0 rss: 74Mb L: 52/66 MS: 1 ChangeBinInt- 00:08:29.580 [2024-10-09 00:16:00.139851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.580 [2024-10-09 00:16:00.139890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.580 #33 
NEW cov: 12479 ft: 14614 corp: 8/333b lim: 90 exec/s: 0 rss: 74Mb L: 31/66 MS: 1 CopyPart- 00:08:29.580 [2024-10-09 00:16:00.191233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.580 [2024-10-09 00:16:00.191262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.191353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.580 [2024-10-09 00:16:00.191370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.191452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.580 [2024-10-09 00:16:00.191471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.580 [2024-10-09 00:16:00.191575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:29.580 [2024-10-09 00:16:00.191596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.839 #34 NEW cov: 12479 ft: 14978 corp: 9/409b lim: 90 exec/s: 0 rss: 74Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:08:29.839 [2024-10-09 00:16:00.241199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.839 [2024-10-09 00:16:00.241228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.241308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.839 [2024-10-09 00:16:00.241328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.241405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.839 [2024-10-09 00:16:00.241423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.839 #35 NEW cov: 12479 ft: 15036 corp: 10/469b lim: 90 exec/s: 0 rss: 74Mb L: 60/76 MS: 1 ChangeBit- 00:08:29.839 [2024-10-09 00:16:00.311515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.839 [2024-10-09 00:16:00.311545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.311623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.839 [2024-10-09 00:16:00.311642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.311729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.839 [2024-10-09 00:16:00.311746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.839 #36 NEW 
cov: 12479 ft: 15076 corp: 11/537b lim: 90 exec/s: 0 rss: 74Mb L: 68/76 MS: 1 CopyPart- 00:08:29.839 [2024-10-09 00:16:00.361444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.839 [2024-10-09 00:16:00.361473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.361536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.839 [2024-10-09 00:16:00.361556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.839 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:29.839 #37 NEW cov: 12502 ft: 15154 corp: 12/589b lim: 90 exec/s: 0 rss: 74Mb L: 52/76 MS: 1 ShuffleBytes- 00:08:29.839 [2024-10-09 00:16:00.411950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.839 [2024-10-09 00:16:00.411981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.412046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.839 [2024-10-09 00:16:00.412064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.839 [2024-10-09 00:16:00.412174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.839 [2024-10-09 00:16:00.412194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.839 #38 NEW cov: 12502 ft: 15175 corp: 13/655b lim: 90 exec/s: 0 rss: 74Mb L: 66/76 MS: 1 InsertRepeatedBytes- 00:08:30.097 [2024-10-09 00:16:00.481810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.097 [2024-10-09 00:16:00.481846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.481917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.097 [2024-10-09 00:16:00.481934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.097 #39 NEW cov: 12502 ft: 15186 corp: 14/708b lim: 90 exec/s: 39 rss: 74Mb L: 53/76 MS: 1 InsertByte- 00:08:30.097 [2024-10-09 00:16:00.552476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.097 [2024-10-09 00:16:00.552506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.552581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.097 [2024-10-09 00:16:00.552598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.552674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.097 [2024-10-09 00:16:00.552694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.097 #40 NEW cov: 12502 ft: 15226 corp: 15/774b lim: 90 exec/s: 40 rss: 74Mb L: 66/76 MS: 1 ChangeBit- 00:08:30.097 [2024-10-09 00:16:00.622662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.097 [2024-10-09 00:16:00.622694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.622788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.097 [2024-10-09 00:16:00.622809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.622891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.097 [2024-10-09 00:16:00.622912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.097 #41 NEW cov: 12502 ft: 15243 corp: 16/834b lim: 90 exec/s: 41 rss: 74Mb L: 60/76 MS: 1 ChangeByte- 00:08:30.097 [2024-10-09 00:16:00.672869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.097 [2024-10-09 00:16:00.672903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.672983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.097 [2024-10-09 00:16:00.673002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.097 [2024-10-09 00:16:00.673086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.097 [2024-10-09 00:16:00.673104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.097 #42 NEW cov: 12502 ft: 15259 corp: 17/894b lim: 90 exec/s: 42 rss: 74Mb L: 60/76 MS: 1 ShuffleBytes- 00:08:30.364 [2024-10-09 00:16:00.743616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.364 [2024-10-09 00:16:00.743649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.743712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.364 [2024-10-09 00:16:00.743730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.743823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.364 [2024-10-09 00:16:00.743842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.743936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:30.364 [2024-10-09 00:16:00.743952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.364 #43 NEW cov: 12502 ft: 15302 corp: 18/966b lim: 90 exec/s: 43 rss: 75Mb L: 72/76 MS: 1 InsertRepeatedBytes- 00:08:30.364 [2024-10-09 00:16:00.813927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.364 [2024-10-09 00:16:00.813958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.814028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.364 [2024-10-09 00:16:00.814046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.814123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.364 [2024-10-09 00:16:00.814139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.814232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:30.364 [2024-10-09 00:16:00.814249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.364 #44 NEW cov: 12502 ft: 15329 corp: 19/1050b lim: 90 exec/s: 44 rss: 75Mb L: 84/84 MS: 1 CrossOver- 00:08:30.364 [2024-10-09 00:16:00.883751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.364 [2024-10-09 00:16:00.883783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.364 [2024-10-09 00:16:00.883851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.364 [2024-10-09 00:16:00.883869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.365 [2024-10-09 00:16:00.883954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.365 [2024-10-09 00:16:00.883970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.365 #45 NEW cov: 12502 ft: 15358 corp: 20/1110b lim: 90 exec/s: 45 rss: 75Mb L: 60/84 MS: 1 ChangeByte- 00:08:30.365 [2024-10-09 00:16:00.933704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.365 [2024-10-09 00:16:00.933734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.365 [2024-10-09 00:16:00.933798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.365 [2024-10-09 00:16:00.933820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.365 #46 NEW cov: 12502 ft: 15376 corp: 21/1163b lim: 90 exec/s: 46 rss: 75Mb L: 53/84 
MS: 1 InsertByte- 00:08:30.365 [2024-10-09 00:16:00.983972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.365 [2024-10-09 00:16:00.984007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.365 [2024-10-09 00:16:00.984109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.365 [2024-10-09 00:16:00.984128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.625 #47 NEW cov: 12502 ft: 15409 corp: 22/1202b lim: 90 exec/s: 47 rss: 75Mb L: 39/84 MS: 1 EraseBytes- 00:08:30.625 [2024-10-09 00:16:01.054379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.625 [2024-10-09 00:16:01.054408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.054477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.625 [2024-10-09 00:16:01.054499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.054577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.625 [2024-10-09 00:16:01.054595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.625 #53 NEW cov: 12502 ft: 15422 corp: 23/1272b lim: 90 exec/s: 53 rss: 75Mb L: 70/84 MS: 1 CopyPart- 00:08:30.625 [2024-10-09 00:16:01.104275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.625 [2024-10-09 00:16:01.104305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.104368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.625 [2024-10-09 00:16:01.104387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.625 #54 NEW cov: 12502 ft: 15500 corp: 24/1325b lim: 90 exec/s: 54 rss: 75Mb L: 53/84 MS: 1 ChangeBinInt- 00:08:30.625 [2024-10-09 00:16:01.174616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.625 [2024-10-09 00:16:01.174645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.174743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.625 [2024-10-09 00:16:01.174760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.625 #55 NEW cov: 12502 ft: 15519 corp: 25/1377b lim: 90 exec/s: 55 rss: 75Mb L: 52/84 MS: 1 ShuffleBytes- 00:08:30.625 [2024-10-09 00:16:01.225037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.625 [2024-10-09 00:16:01.225065] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.225135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.625 [2024-10-09 00:16:01.225153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.625 [2024-10-09 00:16:01.225260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.625 [2024-10-09 00:16:01.225279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.884 #56 NEW cov: 12502 ft: 15535 corp: 26/1443b lim: 90 exec/s: 56 rss: 75Mb L: 66/84 MS: 1 ChangeByte- 00:08:30.884 [2024-10-09 00:16:01.294940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.884 [2024-10-09 00:16:01.294973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.295072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.884 [2024-10-09 00:16:01.295094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.884 #57 NEW cov: 12502 ft: 15551 corp: 27/1495b lim: 90 exec/s: 57 rss: 75Mb L: 52/84 MS: 1 ChangeBit- 00:08:30.884 [2024-10-09 00:16:01.365893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.884 [2024-10-09 00:16:01.365921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.365997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.884 [2024-10-09 00:16:01.366014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.366098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.884 [2024-10-09 00:16:01.366115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.366209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:30.884 [2024-10-09 00:16:01.366227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.884 #58 NEW cov: 12502 ft: 15586 corp: 28/1572b lim: 90 exec/s: 58 rss: 75Mb L: 77/84 MS: 1 InsertByte- 00:08:30.884 [2024-10-09 00:16:01.415439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.884 [2024-10-09 00:16:01.415467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.415529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.884 [2024-10-09 00:16:01.415547] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.884 #59 NEW cov: 12502 ft: 15594 corp: 29/1624b lim: 90 exec/s: 59 rss: 75Mb L: 52/84 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:30.884 [2024-10-09 00:16:01.466052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:30.884 [2024-10-09 00:16:01.466080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.466176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:30.884 [2024-10-09 00:16:01.466194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.884 [2024-10-09 00:16:01.466278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:30.884 [2024-10-09 00:16:01.466296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.884 #60 NEW cov: 12502 ft: 15610 corp: 30/1689b lim: 90 exec/s: 30 rss: 75Mb L: 65/84 MS: 1 CrossOver- 00:08:30.884 #60 DONE cov: 12502 ft: 15610 corp: 30/1689b lim: 90 exec/s: 30 rss: 75Mb 00:08:30.884 ###### Recommended dictionary. ###### 00:08:30.884 "\000\000\000\000" # Uses: 0 00:08:30.884 ###### End of recommended dictionary. ###### 00:08:30.884 Done 60 runs in 2 second(s) 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:31.148 00:16:01 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:31.148 00:16:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:31.148 [2024-10-09 00:16:01.687657] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:31.148 [2024-10-09 00:16:01.687733] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892703 ] 00:08:31.412 [2024-10-09 00:16:01.888126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.412 [2024-10-09 00:16:01.961312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.412 [2024-10-09 00:16:02.020965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.412 [2024-10-09 00:16:02.037186] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:31.669 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.669 INFO: Seed: 2951382483 00:08:31.669 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6), 00:08:31.669 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48), 00:08:31.669 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:31.669 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.669 #2 INITED exec/s: 0 rss: 66Mb 00:08:31.669 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:31.669 This may also happen if the target rejected all inputs we tried so far 00:08:31.669 [2024-10-09 00:16:02.086803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.669 [2024-10-09 00:16:02.086840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.669 [2024-10-09 00:16:02.086878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.669 [2024-10-09 00:16:02.086895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.669 [2024-10-09 00:16:02.086953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.669 [2024-10-09 00:16:02.086973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.669 [2024-10-09 00:16:02.087031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.669 [2024-10-09 00:16:02.087047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.927 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:31.927 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.927 #4 NEW cov: 12250 ft: 12249 corp: 2/43b lim: 50 exec/s: 0 rss: 73Mb L: 42/42 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:31.927 [2024-10-09 00:16:02.417096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.927 [2024-10-09 00:16:02.417131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.927 #15 NEW cov: 12363 ft: 13658 corp: 3/57b lim: 50 exec/s: 0 rss: 73Mb L: 14/42 MS: 1 InsertRepeatedBytes- 00:08:31.927 [2024-10-09 00:16:02.457174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.927 [2024-10-09 00:16:02.457202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.927 #16 NEW cov: 12369 ft: 13960 corp: 4/71b lim: 50 exec/s: 0 rss: 73Mb L: 14/42 MS: 1 ChangeBit- 00:08:31.927 [2024-10-09 00:16:02.517738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.927 [2024-10-09 00:16:02.517765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.927 [2024-10-09 00:16:02.517820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.927 [2024-10-09 00:16:02.517835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.927 [2024-10-09 00:16:02.517885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.927 [2024-10-09 00:16:02.517901] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.928 [2024-10-09 00:16:02.517952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.928 [2024-10-09 00:16:02.517968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.928 #17 NEW cov: 12454 ft: 14171 corp: 5/112b lim: 50 exec/s: 0 rss: 73Mb L: 41/42 MS: 1 EraseBytes- 00:08:32.186 [2024-10-09 00:16:02.577623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.186 [2024-10-09 00:16:02.577649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.186 [2024-10-09 00:16:02.577685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.186 [2024-10-09 00:16:02.577701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.186 #18 NEW cov: 12454 ft: 14619 corp: 6/136b lim: 50 exec/s: 0 rss: 74Mb L: 24/42 MS: 1 InsertRepeatedBytes- 00:08:32.186 [2024-10-09 00:16:02.637663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.186 [2024-10-09 00:16:02.637690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.186 #19 NEW cov: 12454 ft: 14702 corp: 7/154b lim: 50 exec/s: 0 rss: 74Mb L: 18/42 MS: 1 EraseBytes- 00:08:32.186 [2024-10-09 00:16:02.697848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.186 [2024-10-09 00:16:02.697875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.186 #20 NEW cov: 12454 ft: 14785 corp: 8/172b lim: 50 exec/s: 0 rss: 74Mb L: 18/42 MS: 1 CopyPart- 00:08:32.186 [2024-10-09 00:16:02.758397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.186 [2024-10-09 00:16:02.758423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.186 [2024-10-09 00:16:02.758468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.186 [2024-10-09 00:16:02.758484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.186 [2024-10-09 00:16:02.758535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:32.186 [2024-10-09 00:16:02.758551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.186 [2024-10-09 00:16:02.758602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:32.186 [2024-10-09 00:16:02.758618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.186 #21 NEW cov: 12454 ft: 14794 corp: 9/213b lim: 50 exec/s: 0 rss: 74Mb L: 41/42 MS: 1 
ChangeByte- 00:08:32.186 [2024-10-09 00:16:02.818334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.186 [2024-10-09 00:16:02.818360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.186 [2024-10-09 00:16:02.818395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.186 [2024-10-09 00:16:02.818410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.445 #22 NEW cov: 12454 ft: 14891 corp: 10/237b lim: 50 exec/s: 0 rss: 74Mb L: 24/42 MS: 1 ShuffleBytes- 00:08:32.445 [2024-10-09 00:16:02.858234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.445 [2024-10-09 00:16:02.858260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.445 #23 NEW cov: 12454 ft: 14997 corp: 11/252b lim: 50 exec/s: 0 rss: 74Mb L: 15/42 MS: 1 InsertByte- 00:08:32.445 [2024-10-09 00:16:02.898492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.445 [2024-10-09 00:16:02.898517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.445 [2024-10-09 00:16:02.898554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.445 [2024-10-09 00:16:02.898570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.445 #27 NEW cov: 12454 ft: 15042 corp: 12/279b lim: 50 exec/s: 0 rss: 74Mb L: 27/42 MS: 4 CopyPart-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:32.445 [2024-10-09 00:16:02.938443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.445 [2024-10-09 00:16:02.938469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.445 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:32.445 #28 NEW cov: 12477 ft: 15105 corp: 13/297b lim: 50 exec/s: 0 rss: 74Mb L: 18/42 MS: 1 ChangeBinInt- 00:08:32.445 [2024-10-09 00:16:02.998646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.445 [2024-10-09 00:16:02.998678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.445 #29 NEW cov: 12477 ft: 15125 corp: 14/314b lim: 50 exec/s: 0 rss: 74Mb L: 17/42 MS: 1 EraseBytes- 00:08:32.445 [2024-10-09 00:16:03.038718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.445 [2024-10-09 00:16:03.038745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 #30 NEW cov: 12477 ft: 15155 corp: 15/325b lim: 50 exec/s: 30 rss: 74Mb L: 11/42 MS: 1 EraseBytes- 00:08:32.703 [2024-10-09 00:16:03.098965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.098991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 #31 NEW cov: 12477 ft: 15217 corp: 16/343b lim: 50 exec/s: 31 rss: 74Mb L: 18/42 MS: 1 ChangeByte- 00:08:32.703 [2024-10-09 00:16:03.139423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.139449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.139500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.703 [2024-10-09 00:16:03.139515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.139566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:32.703 [2024-10-09 00:16:03.139582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.139633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:32.703 [2024-10-09 00:16:03.139649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.703 #32 NEW cov: 12477 ft: 15269 corp: 17/384b lim: 50 exec/s: 32 rss: 74Mb L: 41/42 MS: 1 ChangeBinInt- 00:08:32.703 [2024-10-09 00:16:03.179406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.179433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.179468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.703 [2024-10-09 00:16:03.179483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.179535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:32.703 [2024-10-09 00:16:03.179550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.703 #33 NEW cov: 12477 ft: 15604 corp: 18/414b lim: 50 exec/s: 33 rss: 74Mb L: 30/42 MS: 1 InsertRepeatedBytes- 00:08:32.703 [2024-10-09 00:16:03.219660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.219686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.219739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.703 [2024-10-09 00:16:03.219754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.219805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:2 nsid:0 00:08:32.703 [2024-10-09 00:16:03.219831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.219882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:32.703 [2024-10-09 00:16:03.219898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.703 #34 NEW cov: 12477 ft: 15659 corp: 19/455b lim: 50 exec/s: 34 rss: 74Mb L: 41/42 MS: 1 ChangeByte- 00:08:32.703 [2024-10-09 00:16:03.259324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.259350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 #35 NEW cov: 12477 ft: 15747 corp: 20/473b lim: 50 exec/s: 35 rss: 74Mb L: 18/42 MS: 1 CopyPart- 00:08:32.703 [2024-10-09 00:16:03.319657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.703 [2024-10-09 00:16:03.319683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.703 [2024-10-09 00:16:03.319720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.703 [2024-10-09 00:16:03.319736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.961 #36 NEW cov: 12477 ft: 15787 corp: 21/497b lim: 50 exec/s: 36 rss: 74Mb L: 24/42 MS: 1 CopyPart- 00:08:32.961 [2024-10-09 00:16:03.379734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.962 [2024-10-09 00:16:03.379761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.962 #37 NEW cov: 12477 ft: 15809 corp: 22/515b lim: 50 exec/s: 37 rss: 74Mb L: 18/42 MS: 1 ChangeBit- 00:08:32.962 [2024-10-09 00:16:03.419773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.962 [2024-10-09 00:16:03.419799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.962 #38 NEW cov: 12477 ft: 15857 corp: 23/532b lim: 50 exec/s: 38 rss: 74Mb L: 17/42 MS: 1 ChangeBit- 00:08:32.962 [2024-10-09 00:16:03.460202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.962 [2024-10-09 00:16:03.460228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.460270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.962 [2024-10-09 00:16:03.460286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.460339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:32.962 [2024-10-09 00:16:03.460355] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.962 #39 NEW cov: 12477 ft: 15873 corp: 24/571b lim: 50 exec/s: 39 rss: 74Mb L: 39/42 MS: 1 EraseBytes- 00:08:32.962 [2024-10-09 00:16:03.500475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.962 [2024-10-09 00:16:03.500501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.500548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.962 [2024-10-09 00:16:03.500564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.500614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:32.962 [2024-10-09 00:16:03.500633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.500685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:32.962 [2024-10-09 00:16:03.500701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.962 #40 NEW cov: 12477 ft: 15886 corp: 25/616b lim: 50 exec/s: 40 rss: 74Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:32.962 [2024-10-09 00:16:03.560356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:32.962 [2024-10-09 00:16:03.560383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.962 [2024-10-09 00:16:03.560418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:32.962 [2024-10-09 00:16:03.560436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.219 #41 NEW cov: 12477 ft: 15901 corp: 26/643b lim: 50 exec/s: 41 rss: 74Mb L: 27/45 MS: 1 ChangeByte- 00:08:33.219 [2024-10-09 00:16:03.620519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.219 [2024-10-09 00:16:03.620547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.620581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.219 [2024-10-09 00:16:03.620597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.219 #42 NEW cov: 12477 ft: 15933 corp: 27/670b lim: 50 exec/s: 42 rss: 75Mb L: 27/45 MS: 1 ChangeBit- 00:08:33.219 [2024-10-09 00:16:03.660757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.219 [2024-10-09 00:16:03.660784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.660834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:1 nsid:0 00:08:33.219 [2024-10-09 00:16:03.660850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.660918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.219 [2024-10-09 00:16:03.660932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.219 #43 NEW cov: 12477 ft: 15948 corp: 28/700b lim: 50 exec/s: 43 rss: 75Mb L: 30/45 MS: 1 CopyPart- 00:08:33.219 [2024-10-09 00:16:03.721117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.219 [2024-10-09 00:16:03.721143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.721194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.219 [2024-10-09 00:16:03.721209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.721259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.219 [2024-10-09 00:16:03.721275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.721327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:33.219 [2024-10-09 00:16:03.721342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.219 #44 NEW cov: 12477 ft: 15978 corp: 29/741b lim: 50 exec/s: 44 rss: 75Mb L: 41/45 MS: 1 CopyPart- 00:08:33.219 [2024-10-09 00:16:03.781118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.219 [2024-10-09 00:16:03.781143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.781190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.219 [2024-10-09 00:16:03.781205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.219 [2024-10-09 00:16:03.781256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.219 [2024-10-09 00:16:03.781270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.219 #45 NEW cov: 12477 ft: 15991 corp: 30/771b lim: 50 exec/s: 45 rss: 75Mb L: 30/45 MS: 1 CopyPart- 00:08:33.219 [2024-10-09 00:16:03.840983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.219 [2024-10-09 00:16:03.841008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.477 #46 NEW cov: 12477 ft: 15999 corp: 31/785b lim: 50 exec/s: 46 rss: 75Mb L: 14/45 MS: 1 ChangeByte- 00:08:33.477 
[2024-10-09 00:16:03.881522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.477 [2024-10-09 00:16:03.881549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.477 [2024-10-09 00:16:03.881597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.478 [2024-10-09 00:16:03.881612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.478 [2024-10-09 00:16:03.881663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.478 [2024-10-09 00:16:03.881678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.478 [2024-10-09 00:16:03.881729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:33.478 [2024-10-09 00:16:03.881744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.478 #47 NEW cov: 12477 ft: 16010 corp: 32/826b lim: 50 exec/s: 47 rss: 75Mb L: 41/45 MS: 1 ChangeBinInt- 00:08:33.478 [2024-10-09 00:16:03.941268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.478 [2024-10-09 00:16:03.941295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.478 #48 NEW cov: 12477 ft: 16026 corp: 33/844b lim: 50 exec/s: 48 rss: 75Mb L: 18/45 MS: 1 ShuffleBytes- 00:08:33.478 [2024-10-09 00:16:03.981295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.478 [2024-10-09 00:16:03.981321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.478 #49 NEW cov: 12477 ft: 16056 corp: 34/860b lim: 50 exec/s: 49 rss: 75Mb L: 16/45 MS: 1 InsertByte- 00:08:33.478 [2024-10-09 00:16:04.041936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:33.478 [2024-10-09 00:16:04.041964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.478 [2024-10-09 00:16:04.042008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:33.478 [2024-10-09 00:16:04.042024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.478 [2024-10-09 00:16:04.042079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:33.478 [2024-10-09 00:16:04.042095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.478 [2024-10-09 00:16:04.042145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:33.478 [2024-10-09 00:16:04.042161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.478 #50 
NEW cov: 12477 ft: 16089 corp: 35/901b lim: 50 exec/s: 25 rss: 75Mb L: 41/45 MS: 1 ChangeBit- 00:08:33.478 #50 DONE cov: 12477 ft: 16089 corp: 35/901b lim: 50 exec/s: 25 rss: 75Mb 00:08:33.478 Done 50 runs in 2 second(s) 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:33.736 00:16:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:33.736 [2024-10-09 00:16:04.268029] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:08:33.736 [2024-10-09 00:16:04.268100] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893062 ]
00:08:33.994 [2024-10-09 00:16:04.464329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.994 [2024-10-09 00:16:04.537190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.994 [2024-10-09 00:16:04.596267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:33.994 [2024-10-09 00:16:04.612489] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:08:33.994 INFO: Running with entropic power schedule (0xFF, 100).
00:08:33.994 INFO: Seed: 1232425280
00:08:34.252 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6),
00:08:34.252 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48),
00:08:34.252 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:08:34.252 INFO: A corpus is not provided, starting from an empty corpus
00:08:34.252 #2 INITED exec/s: 0 rss: 66Mb
00:08:34.252 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:34.252 This may also happen if the target rejected all inputs we tried so far
00:08:34.252 [2024-10-09 00:16:04.657278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.252 [2024-10-09 00:16:04.657313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.511 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644
00:08:34.511 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:34.511 #6 NEW cov: 12266 ft: 12246 corp: 2/33b lim: 85 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 CopyPart-InsertByte-ShuffleBytes-InsertRepeatedBytes-
00:08:34.511 [2024-10-09 00:16:05.008166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.511 [2024-10-09 00:16:05.008209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.511 #10 NEW cov: 12389 ft: 12803 corp: 3/51b lim: 85 exec/s: 0 rss: 73Mb L: 18/32 MS: 4 CMP-ChangeBinInt-ChangeByte-InsertRepeatedBytes- DE: "F\264\356\335\261\030'\000"-
00:08:34.511 [2024-10-09 00:16:05.068200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.511 [2024-10-09 00:16:05.068233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.511 #11 NEW cov: 12395 ft: 13178 corp: 4/70b lim: 85 exec/s: 0 rss: 74Mb L: 19/32 MS: 1 InsertByte-
00:08:34.777 [2024-10-09 00:16:05.158557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.777 [2024-10-09 00:16:05.158588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.777 [2024-10-09 00:16:05.158638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:34.777 [2024-10-09 00:16:05.158655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:34.777 [2024-10-09 00:16:05.158685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:08:34.777 [2024-10-09 00:16:05.158702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:34.777 #12 NEW cov: 12480 ft: 14291 corp: 5/127b lim: 85 exec/s: 0 rss: 74Mb L: 57/57 MS: 1 InsertRepeatedBytes-
00:08:34.777 [2024-10-09 00:16:05.258708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.777 [2024-10-09 00:16:05.258740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.777 #13 NEW cov: 12480 ft: 14441 corp: 6/146b lim: 85 exec/s: 0 rss: 74Mb L: 19/57 MS: 1 ShuffleBytes-
00:08:34.777 [2024-10-09 00:16:05.318798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.777 [2024-10-09 00:16:05.318835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:34.777 #14 NEW cov: 12480 ft: 14555 corp: 7/165b lim: 85 exec/s: 0 rss: 74Mb L: 19/57 MS: 1 ChangeByte-
00:08:34.777 [2024-10-09 00:16:05.368941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:34.778 [2024-10-09 00:16:05.368979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.038 #15 NEW cov: 12480 ft: 14640 corp: 8/184b lim: 85 exec/s: 0 rss: 74Mb L: 19/57 MS: 1 ChangeBit-
00:08:35.038 [2024-10-09 00:16:05.459198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.038 [2024-10-09 00:16:05.459228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.038 #16 NEW cov: 12480 ft: 14687 corp: 9/206b lim: 85 exec/s: 0 rss: 74Mb L: 22/57 MS: 1 CopyPart-
00:08:35.038 [2024-10-09 00:16:05.509328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.038 [2024-10-09 00:16:05.509358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.038 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:35.038 #17 NEW cov: 12503 ft: 14740 corp: 10/224b lim: 85 exec/s: 0 rss: 74Mb L: 18/57 MS: 1 EraseBytes-
00:08:35.038 [2024-10-09 00:16:05.599527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.297 [2024-10-09 00:16:05.599556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.297 #18 NEW cov: 12503 ft: 14816 corp: 11/254b lim: 85 exec/s: 18 rss: 74Mb L: 30/57 MS: 1 PersAutoDict- DE: "F\264\356\335\261\030'\000"-
00:08:35.297 [2024-10-09 00:16:05.689799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.297 [2024-10-09 00:16:05.689835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.297 #19 NEW cov: 12503 ft: 14874 corp: 12/279b lim: 85 exec/s: 19 rss: 74Mb L: 25/57 MS: 1 CrossOver-
00:08:35.297 [2024-10-09 00:16:05.739938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.297 [2024-10-09 00:16:05.739971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.297 #20 NEW cov: 12503 ft: 14902 corp: 13/303b lim: 85 exec/s: 20 rss: 74Mb L: 24/57 MS: 1 CrossOver-
00:08:35.297 [2024-10-09 00:16:05.790092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.297 [2024-10-09 00:16:05.790121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.297 [2024-10-09 00:16:05.790170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:35.297 [2024-10-09 00:16:05.790188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:35.297 #23 NEW cov: 12503 ft: 15203 corp: 14/338b lim: 85 exec/s: 23 rss: 74Mb L: 35/57 MS: 3 PersAutoDict-InsertByte-CrossOver- DE: "F\264\356\335\261\030'\000"-
00:08:35.297 [2024-10-09 00:16:05.850210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.297 [2024-10-09 00:16:05.850240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.297 #24 NEW cov: 12503 ft: 15242 corp: 15/368b lim: 85 exec/s: 24 rss: 74Mb L: 30/57 MS: 1 ChangeByte-
00:08:35.569 [2024-10-09 00:16:05.940485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.569 [2024-10-09 00:16:05.940515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.569 #25 NEW cov: 12503 ft: 15248 corp: 16/387b lim: 85 exec/s: 25 rss: 74Mb L: 19/57 MS: 1 ChangeByte-
00:08:35.569 [2024-10-09 00:16:05.990549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.569 [2024-10-09 00:16:05.990579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.569 #26 NEW cov: 12503 ft: 15270 corp: 17/405b lim: 85 exec/s: 26 rss: 74Mb L: 18/57 MS: 1 ChangeBinInt-
00:08:35.569 [2024-10-09 00:16:06.050726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.569 [2024-10-09 00:16:06.050758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.569 #27 NEW cov: 12503 ft: 15294 corp: 18/424b lim: 85 exec/s: 27 rss: 74Mb L: 19/57 MS: 1 ChangeBinInt-
00:08:35.569 [2024-10-09 00:16:06.141078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.569 [2024-10-09 00:16:06.141110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.569 [2024-10-09 00:16:06.141144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:35.569 [2024-10-09 00:16:06.141161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:35.834 #28 NEW cov: 12503 ft: 15310 corp: 19/462b lim: 85 exec/s: 28 rss: 74Mb L: 38/57 MS: 1 PersAutoDict- DE: "F\264\356\335\261\030'\000"-
00:08:35.834 [2024-10-09 00:16:06.231208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.834 [2024-10-09 00:16:06.231237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.834 #29 NEW cov: 12503 ft: 15334 corp: 20/493b lim: 85 exec/s: 29 rss: 74Mb L: 31/57 MS: 1 CopyPart-
00:08:35.834 [2024-10-09 00:16:06.281410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.834 [2024-10-09 00:16:06.281443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.834 [2024-10-09 00:16:06.281478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:35.834 [2024-10-09 00:16:06.281503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:35.834 #30 NEW cov: 12503 ft: 15363 corp: 21/536b lim: 85 exec/s: 30 rss: 74Mb L: 43/57 MS: 1 InsertRepeatedBytes-
00:08:35.834 [2024-10-09 00:16:06.371629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.834 [2024-10-09 00:16:06.371658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:35.834 [2024-10-09 00:16:06.371708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:08:35.834 [2024-10-09 00:16:06.371727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:35.834 #31 NEW cov: 12503 ft: 15385 corp: 22/579b lim: 85 exec/s: 31 rss: 74Mb L: 43/57 MS: 1 CMP- DE: "\000'\030\255m09\344"-
00:08:35.834 [2024-10-09 00:16:06.461806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:35.834 [2024-10-09 00:16:06.461841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:36.093 #32 NEW cov: 12503 ft: 15465 corp: 23/611b lim: 85 exec/s: 32 rss: 75Mb L: 32/57 MS: 1 PersAutoDict- DE: "\000'\030\255m09\344"-
00:08:36.093 [2024-10-09 00:16:06.552084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:36.093 [2024-10-09 00:16:06.552113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:36.093 #33 NEW cov: 12503 ft: 15470 corp: 24/629b lim: 85 exec/s: 33 rss: 75Mb L: 18/57 MS: 1 ShuffleBytes-
00:08:36.093 [2024-10-09 00:16:06.642334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:08:36.093 [2024-10-09 00:16:06.642368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:36.093 #39 NEW cov: 12503 ft: 15479 corp: 25/652b lim: 85 exec/s: 19 rss: 75Mb L: 23/57 MS: 1 CMP- DE: "\365\377\377\377"-
00:08:36.093 #39 DONE cov: 12503 ft: 15479 corp: 25/652b lim: 85 exec/s: 19 rss: 75Mb
00:08:36.093 ###### Recommended dictionary. ######
00:08:36.093 "F\264\356\335\261\030'\000" # Uses: 3
00:08:36.093 "\000'\030\255m09\344" # Uses: 1
00:08:36.093 "\365\377\377\377" # Uses: 0
00:08:36.093 ###### End of recommended dictionary. ######
00:08:36.093 Done 39 runs in 2 second(s)
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:36.353 00:16:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
00:08:36.353 [2024-10-09 00:16:06.900502] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:36.353 [2024-10-09 00:16:06.900571] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893418 ]
00:08:36.612 [2024-10-09 00:16:07.105932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.612 [2024-10-09 00:16:07.179248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.612 [2024-10-09 00:16:07.238270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:36.871 [2024-10-09 00:16:07.254493] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 ***
00:08:36.871 INFO: Running with entropic power schedule (0xFF, 100).
00:08:36.871 INFO: Seed: 3875420341
00:08:36.871 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6),
00:08:36.871 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48),
00:08:36.871 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:08:36.871 INFO: A corpus is not provided, starting from an empty corpus
00:08:36.871 #2 INITED exec/s: 0 rss: 66Mb
00:08:36.871 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:36.871 This may also happen if the target rejected all inputs we tried so far
00:08:36.871 [2024-10-09 00:16:07.309779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:36.871 [2024-10-09 00:16:07.309810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.130 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671
00:08:37.130 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:37.130 #3 NEW cov: 12209 ft: 12204 corp: 2/10b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\001'\030\255\354Q\323\234"-
00:08:37.130 [2024-10-09 00:16:07.682256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.130 [2024-10-09 00:16:07.682314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.130 #4 NEW cov: 12322 ft: 12810 corp: 3/19b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001'\030\255\354Q\323\234"-
00:08:37.390 [2024-10-09 00:16:07.752453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.390 [2024-10-09 00:16:07.752491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.390 #5 NEW cov: 12328 ft: 13071 corp: 4/28b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt-
00:08:37.390 [2024-10-09 00:16:07.823146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.390 [2024-10-09 00:16:07.823175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.390 [2024-10-09 00:16:07.823228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.390 [2024-10-09 00:16:07.823245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.390 [2024-10-09 00:16:07.823315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.390 [2024-10-09 00:16:07.823332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.390 #6 NEW cov: 12413 ft: 13758 corp: 5/46b lim: 25 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 CrossOver-
00:08:37.390 [2024-10-09 00:16:07.873207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.390 [2024-10-09 00:16:07.873229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.390 [2024-10-09 00:16:07.873315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.390 [2024-10-09 00:16:07.873328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.390 [2024-10-09 00:16:07.873395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.390 [2024-10-09 00:16:07.873412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.390 #7 NEW cov: 12413 ft: 13797 corp: 6/64b lim: 25 exec/s: 0 rss: 73Mb L: 18/18 MS: 1 PersAutoDict- DE: "\001'\030\255\354Q\323\234"-
00:08:37.390 [2024-10-09 00:16:07.942991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.390 #8 NEW cov: 12413 ft: 13840 corp: 7/73b lim: 25 exec/s: 0 rss: 73Mb L: 9/18 MS: 1 ChangeByte-
00:08:37.390 [2024-10-09 00:16:07.993779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.649 [2024-10-09 00:16:07.993806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.649 [2024-10-09 00:16:07.993885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.649 [2024-10-09 00:16:07.993901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.649 [2024-10-09 00:16:07.993976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.649 [2024-10-09 00:16:07.993994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.649 [2024-10-09 00:16:07.994080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:37.649 [2024-10-09 00:16:07.994101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:37.649 #10 NEW cov: 12413 ft: 14333 corp: 8/95b lim: 25 exec/s: 0 rss: 73Mb L: 22/22 MS: 2 CrossOver-InsertRepeatedBytes-
00:08:37.649 [2024-10-09 00:16:08.043541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.649 [2024-10-09 00:16:08.043571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.649 [2024-10-09 00:16:08.043634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.649 [2024-10-09 00:16:08.043651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.649 #11 NEW cov: 12413 ft: 14581 corp: 9/105b lim: 25 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 InsertByte-
00:08:37.649 [2024-10-09 00:16:08.113559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.649 #12 NEW cov: 12413 ft: 14600 corp: 10/112b lim: 25 exec/s: 0 rss: 74Mb L: 7/22 MS: 1 EraseBytes-
00:08:37.649 [2024-10-09 00:16:08.183969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.649 [2024-10-09 00:16:08.184088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.649 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:37.649 #13 NEW cov: 12436 ft: 14674 corp: 11/122b lim: 25 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 ShuffleBytes-
00:08:37.649 [2024-10-09 00:16:08.254465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.649 [2024-10-09 00:16:08.254495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.649 [2024-10-09 00:16:08.254565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.649 [2024-10-09 00:16:08.254585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.649 #14 NEW cov: 12436 ft: 14732 corp: 12/140b lim: 25 exec/s: 0 rss: 74Mb L: 18/22 MS: 1 InsertRepeatedBytes-
00:08:37.908 [2024-10-09 00:16:08.304700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.908 [2024-10-09 00:16:08.304792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.908 [2024-10-09 00:16:08.304808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.304889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.908 [2024-10-09 00:16:08.304907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.908 #15 NEW cov: 12436 ft: 14740 corp: 13/158b lim: 25 exec/s: 15 rss: 74Mb L: 18/22 MS: 1 InsertRepeatedBytes-
00:08:37.908 [2024-10-09 00:16:08.355089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.908 [2024-10-09 00:16:08.355120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.355221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.908 [2024-10-09 00:16:08.355238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.355322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.908 [2024-10-09 00:16:08.355340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.355429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:37.908 [2024-10-09 00:16:08.355448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:37.908 #16 NEW cov: 12436 ft: 14774 corp: 14/182b lim: 25 exec/s: 16 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes-
00:08:37.908 [2024-10-09 00:16:08.424572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.908 [2024-10-09 00:16:08.424599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.908 #20 NEW cov: 12436 ft: 14869 corp: 15/187b lim: 25 exec/s: 20 rss: 74Mb L: 5/24 MS: 4 InsertByte-ChangeByte-ShuffleBytes-CrossOver-
00:08:37.908 [2024-10-09 00:16:08.475660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:37.908 [2024-10-09 00:16:08.475688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.475776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:37.908 [2024-10-09 00:16:08.475792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.475876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:37.908 [2024-10-09 00:16:08.475892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:37.908 [2024-10-09 00:16:08.475982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:37.908 [2024-10-09 00:16:08.476001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:37.908 #21 NEW cov: 12436 ft: 14921 corp: 16/211b lim: 25 exec/s: 21 rss: 74Mb L: 24/24 MS: 1 ShuffleBytes-
00:08:38.166 [2024-10-09 00:16:08.545117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.166 [2024-10-09 00:16:08.545146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.166 #22 NEW cov: 12436 ft: 14950 corp: 17/217b lim: 25 exec/s: 22 rss: 74Mb L: 6/24 MS: 1 EraseBytes-
00:08:38.166 [2024-10-09 00:16:08.595278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.166 [2024-10-09 00:16:08.595305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.166 #23 NEW cov: 12436 ft: 14973 corp: 18/224b lim: 25 exec/s: 23 rss: 74Mb L: 7/24 MS: 1 ShuffleBytes-
00:08:38.166 [2024-10-09 00:16:08.646313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.166 [2024-10-09 00:16:08.646341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.646429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.166 [2024-10-09 00:16:08.646446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.646523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.166 [2024-10-09 00:16:08.646545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.646632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.166 [2024-10-09 00:16:08.646650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.166 #24 NEW cov: 12436 ft: 15003 corp: 19/248b lim: 25 exec/s: 24 rss: 74Mb L: 24/24 MS: 1 ShuffleBytes-
00:08:38.166 [2024-10-09 00:16:08.716817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.166 [2024-10-09 00:16:08.716845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.716937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.166 [2024-10-09 00:16:08.716953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.717039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.166 [2024-10-09 00:16:08.717058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.717143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.166 [2024-10-09 00:16:08.717163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.717250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:08:38.166 [2024-10-09 00:16:08.717269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:08:38.166 #25 NEW cov: 12436 ft: 15059 corp: 20/273b lim: 25 exec/s: 25 rss: 74Mb L: 25/25 MS: 1 CrossOver-
00:08:38.166 [2024-10-09 00:16:08.786339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.166 [2024-10-09 00:16:08.786369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.166 [2024-10-09 00:16:08.786432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.166 [2024-10-09 00:16:08.786455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.425 #26 NEW cov: 12436 ft: 15067 corp: 21/283b lim: 25 exec/s: 26 rss: 74Mb L: 10/25 MS: 1 InsertByte-
00:08:38.425 [2024-10-09 00:16:08.836418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.425 [2024-10-09 00:16:08.836447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:08.836526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.425 [2024-10-09 00:16:08.836559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.425 #29 NEW cov: 12436 ft: 15107 corp: 22/294b lim: 25 exec/s: 29 rss: 74Mb L: 11/25 MS: 3 EraseBytes-ChangeBinInt-CMP- DE: "\001\000\000\000\000\000\000\000"-
00:08:38.425 [2024-10-09 00:16:08.906796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.425 [2024-10-09 00:16:08.906832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:08.906902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.425 [2024-10-09 00:16:08.906923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.425 #30 NEW cov: 12436 ft: 15176 corp: 23/306b lim: 25 exec/s: 30 rss: 74Mb L: 12/25 MS: 1 InsertByte-
00:08:38.425 [2024-10-09 00:16:08.976946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.425 [2024-10-09 00:16:08.976980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:08.977048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.425 [2024-10-09 00:16:08.977067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.425 #31 NEW cov: 12436 ft: 15187 corp: 24/317b lim: 25 exec/s: 31 rss: 74Mb L: 11/25 MS: 1 ChangeBinInt-
00:08:38.425 [2024-10-09 00:16:09.027761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.425 [2024-10-09 00:16:09.027791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:09.027866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.425 [2024-10-09 00:16:09.027883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:09.027962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.425 [2024-10-09 00:16:09.027982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.425 [2024-10-09 00:16:09.028081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.425 [2024-10-09 00:16:09.028103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.684 #32 NEW cov: 12436 ft: 15228 corp: 25/339b lim: 25 exec/s: 32 rss: 74Mb L: 22/25 MS: 1 CrossOver-
00:08:38.684 [2024-10-09 00:16:09.097289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.684 [2024-10-09 00:16:09.097319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.684 #33 NEW cov: 12436 ft: 15234 corp: 26/348b lim: 25 exec/s: 33 rss: 74Mb L: 9/25 MS: 1 ShuffleBytes-
00:08:38.684 [2024-10-09 00:16:09.148526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.684 [2024-10-09 00:16:09.148554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.148652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.684 [2024-10-09 00:16:09.148670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.148760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.684 [2024-10-09 00:16:09.148782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.148888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.684 [2024-10-09 00:16:09.148907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.149010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0
00:08:38.684 [2024-10-09 00:16:09.149027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:08:38.684 #34 NEW cov: 12436 ft: 15239 corp: 27/373b lim: 25 exec/s: 34 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes-
00:08:38.684 [2024-10-09 00:16:09.218582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.684 [2024-10-09 00:16:09.218612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.218705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.684 [2024-10-09 00:16:09.218721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.218790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.684 [2024-10-09 00:16:09.218809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.684 [2024-10-09 00:16:09.218902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.684 [2024-10-09 00:16:09.218924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.684 #35 NEW cov: 12436 ft: 15278 corp: 28/397b lim: 25 exec/s: 35 rss: 75Mb L: 24/25 MS: 1 CrossOver-
00:08:38.684 [2024-10-09 00:16:09.267972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.684 [2024-10-09 00:16:09.267998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.684 #36 NEW cov: 12436 ft: 15289 corp: 29/404b lim: 25 exec/s: 36 rss: 75Mb L: 7/25 MS: 1 ChangeBit-
00:08:38.943 [2024-10-09 00:16:09.319035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0
00:08:38.943 [2024-10-09 00:16:09.319064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:38.943 [2024-10-09 00:16:09.319149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0
00:08:38.943 [2024-10-09 00:16:09.319165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:38.943 [2024-10-09 00:16:09.319248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0
00:08:38.943 [2024-10-09 00:16:09.319266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:38.943 [2024-10-09 00:16:09.319349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0
00:08:38.943 [2024-10-09 00:16:09.319369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:38.943 #37 NEW cov: 12436 ft: 15309 corp: 30/427b lim: 25 exec/s: 18 rss: 75Mb L: 23/25 MS: 1 InsertRepeatedBytes-
00:08:38.943 #37 DONE cov: 12436 ft: 15309 corp: 30/427b lim: 25 exec/s: 18 rss: 75Mb
00:08:38.943 ###### Recommended dictionary. ######
00:08:38.943 "\001'\030\255\354Q\323\234" # Uses: 2
00:08:38.943 "\001\000\000\000\000\000\000\000" # Uses: 0
00:08:38.943 ###### End of recommended dictionary. ######
00:08:38.943 Done 37 runs in 2 second(s)
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424'
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:38.943 00:16:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24
00:08:38.944 [2024-10-09 00:16:09.520789] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:38.944 [2024-10-09 00:16:09.520861] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893777 ]
00:08:39.203 [2024-10-09 00:16:09.717145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.203 [2024-10-09 00:16:09.789731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.462 [2024-10-09 00:16:09.848694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:39.462 [2024-10-09 00:16:09.864917] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 ***
00:08:39.462 INFO: Running with entropic power schedule (0xFF, 100).
00:08:39.462 INFO: Seed: 2188457364
00:08:39.462 INFO: Loaded 1 modules (384346 inline 8-bit counters): 384346 [0x2bec84c, 0x2c4a5a6),
00:08:39.462 INFO: Loaded 1 PC tables (384346 PCs): 384346 [0x2c4a5a8,0x3227b48),
00:08:39.462 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24
00:08:39.462 INFO: A corpus is not provided, starting from an empty corpus
00:08:39.462 #2 INITED exec/s: 0 rss: 66Mb
00:08:39.462 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:39.462 This may also happen if the target rejected all inputs we tried so far
00:08:39.462 [2024-10-09 00:16:09.912759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.462 [2024-10-09 00:16:09.912790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.462 [2024-10-09 00:16:09.912832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.462 [2024-10-09 00:16:09.912848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.462 [2024-10-09 00:16:09.912902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.462 [2024-10-09 00:16:09.912917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.462 [2024-10-09 00:16:09.912970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.462 [2024-10-09 00:16:09.912985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:39.722 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685
00:08:39.722 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:39.722 #44 NEW cov: 12281 ft: 12271 corp: 2/84b lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 2 ChangeBit-InsertRepeatedBytes-
00:08:39.722 [2024-10-09 00:16:10.263805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.263868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.263904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.263920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.263976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.263993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.264048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.264063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:39.722 #45 NEW cov: 12394 ft: 12832 corp: 3/167b lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 ChangeBit-
00:08:39.722 [2024-10-09 00:16:10.323900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.323933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.323971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.323990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.324049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.324065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.722 [2024-10-09 00:16:10.324121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.722 [2024-10-09 00:16:10.324136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:39.981 #46 NEW cov: 12400 ft: 13064 corp: 4/251b lim: 100 exec/s: 0 rss: 74Mb L: 84/84 MS: 1 CrossOver-
00:08:39.981 [2024-10-09 00:16:10.383864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.383893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.383938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.383953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.384009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17870283321406128127 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.384024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.981 #48 NEW cov: 12485 ft: 13847 corp: 5/315b lim: 100 exec/s: 0 rss: 74Mb L: 64/84 MS: 2 CopyPart-CrossOver-
00:08:39.981 [2024-10-09 00:16:10.424009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.424038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.424079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.424096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.424155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.424170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.981 #49 NEW cov: 12485 ft: 13967 corp: 6/380b lim: 100 exec/s: 0 rss: 74Mb L: 65/84 MS: 1 EraseBytes-
00:08:39.981 [2024-10-09 00:16:10.464235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.464263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.464321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.464337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.464397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.464413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.464470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.464486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:39.981 #50 NEW cov: 12485 ft: 14005 corp: 7/465b lim: 100 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 InsertByte-
00:08:39.981 [2024-10-09 00:16:10.524264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.524292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.524358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.524376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.524434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7378697629483820646 len:26215 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.524448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.981 #51 NEW cov: 12485 ft: 14068 corp: 8/527b lim: 100 exec/s: 0 rss: 74Mb L: 62/85 MS: 1 InsertRepeatedBytes-
00:08:39.981 [2024-10-09 00:16:10.564517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.564544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.564595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.564612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.564664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446719884453740543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.564680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:39.981 [2024-10-09 00:16:10.564735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.981 [2024-10-09 00:16:10.564751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:39.982 #52 NEW cov: 12485 ft: 14118 corp: 9/612b lim: 100 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 InsertByte-
00:08:39.982 [2024-10-09 00:16:10.604174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:39.982 [2024-10-09 00:16:10.604202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 #54 NEW cov: 12485 ft: 14962 corp: 10/635b lim: 100 exec/s: 0 rss: 74Mb L: 23/85 MS: 2 ChangeBit-InsertRepeatedBytes-
00:08:40.241 [2024-10-09 00:16:10.644775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.644805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.644849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.644866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.644921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9621243513505054719 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.644938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.644994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.645010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:40.241 #55 NEW cov: 12485 ft: 14990 corp: 11/721b lim: 100 exec/s: 0 rss: 74Mb L: 86/86 MS: 1 InsertRepeatedBytes-
00:08:40.241 [2024-10-09 00:16:10.684575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.684602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.684654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.684670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:40.241 #61 NEW cov: 12485 ft: 15363 corp: 12/780b lim: 100 exec/s: 0 rss: 74Mb L: 59/86 MS: 1 CrossOver-
00:08:40.241 [2024-10-09 00:16:10.724788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.724818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.724866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.724882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.724939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.724955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:40.241 #62 NEW cov: 12485 ft: 15421 corp: 13/846b lim: 100 exec/s: 0 rss: 74Mb L: 66/86 MS: 1 EraseBytes-
00:08:40.241 [2024-10-09 00:16:10.765088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.765116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.765169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.765186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.765241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073708109578 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.765261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.765317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.765332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:40.241 NEW_FUNC[1/1]: 0x1bfc8d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:40.241 #63 NEW cov: 12508 ft: 15491 corp: 14/926b lim: 100 exec/s: 0 rss: 74Mb L: 80/86 MS: 1 EraseBytes-
00:08:40.241 [2024-10-09 00:16:10.825334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.825364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.825411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.825427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.825485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446719884453740543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.825502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:40.241 [2024-10-09 00:16:10.825556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:40.241 [2024-10-09 00:16:10.825572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:40.241 #64 NEW cov: 12508 ft: 15603 corp: 15/1011b lim: 100 exec/s: 0 rss: 74Mb L: 85/86 MS: 1 ShuffleBytes-
00:08:40.241 [2024-10-09 00:16:10.865253] nvme_qpair.c:
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.242 [2024-10-09 00:16:10.865281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.242 [2024-10-09 00:16:10.865332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.242 [2024-10-09 00:16:10.865349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.242 [2024-10-09 00:16:10.865406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709543423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.242 [2024-10-09 00:16:10.865423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.501 #65 NEW cov: 12508 ft: 15635 corp: 16/1077b lim: 100 exec/s: 65 rss: 74Mb L: 66/86 MS: 1 ChangeBit- 00:08:40.501 [2024-10-09 00:16:10.925419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.925447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:10.925496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.925511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:10.925572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.925588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.501 #66 NEW cov: 12508 ft: 15662 corp: 17/1142b lim: 100 exec/s: 66 rss: 74Mb L: 65/86 MS: 1 ShuffleBytes- 00:08:40.501 [2024-10-09 00:16:10.985780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.985808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:10.985866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.985883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:10.985940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.985958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:40.501 [2024-10-09 00:16:10.986016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:10.986033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.501 #67 NEW cov: 12508 ft: 15694 corp: 18/1226b lim: 100 exec/s: 67 rss: 74Mb L: 84/86 MS: 1 ChangeByte- 00:08:40.501 [2024-10-09 00:16:11.025676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.025704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.025753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.025768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.025829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.025845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.501 #68 NEW cov: 12508 ft: 15706 corp: 19/1293b lim: 100 exec/s: 68 rss: 74Mb L: 67/86 MS: 1 InsertByte- 00:08:40.501 [2024-10-09 00:16:11.065999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.066026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.066085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.066100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.066159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446743021442564095 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.066176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.066236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.066252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.501 #69 NEW cov: 12508 ft: 15772 corp: 20/1378b lim: 100 exec/s: 69 rss: 74Mb L: 85/86 MS: 1 ShuffleBytes- 00:08:40.501 [2024-10-09 00:16:11.126406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:40.501 [2024-10-09 00:16:11.126436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.126493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18377782704415440895 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.126510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.126564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.126581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.126639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551370 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.126656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.501 [2024-10-09 00:16:11.126717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.501 [2024-10-09 00:16:11.126733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:40.760 #70 NEW cov: 12508 ft: 15865 corp: 21/1478b lim: 100 exec/s: 70 rss: 74Mb L: 100/100 MS: 1 CopyPart- 00:08:40.760 [2024-10-09 00:16:11.186213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.186244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.186284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.186301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.186358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3819052484010180607 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.186375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.760 #71 NEW cov: 12508 ft: 15892 corp: 22/1545b lim: 100 exec/s: 71 rss: 74Mb L: 67/100 MS: 1 InsertByte- 00:08:40.760 [2024-10-09 00:16:11.226425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.226453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.226507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.226526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.226579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:34304 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.226594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.226651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709549567 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.226667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.760 #72 NEW cov: 12508 ft: 15940 corp: 23/1635b lim: 100 exec/s: 72 rss: 74Mb L: 90/100 MS: 1 CMP- DE: "\000\000\000\001"- 00:08:40.760 [2024-10-09 00:16:11.286326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.286354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.760 [2024-10-09 00:16:11.286407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.760 [2024-10-09 00:16:11.286423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.760 #73 NEW cov: 12508 ft: 15948 corp: 24/1683b lim: 100 exec/s: 73 rss: 74Mb L: 48/100 MS: 1 EraseBytes- 00:08:40.761 [2024-10-09 00:16:11.346804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.761 [2024-10-09 00:16:11.346852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.761 [2024-10-09 00:16:11.346912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.761 [2024-10-09 00:16:11.346930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.761 [2024-10-09 00:16:11.346986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.761 [2024-10-09 00:16:11.347003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.761 [2024-10-09 00:16:11.347059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.761 [2024-10-09 00:16:11.347075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.761 #74 NEW cov: 12508 ft: 15959 corp: 
25/1774b lim: 100 exec/s: 74 rss: 74Mb L: 91/100 MS: 1 CopyPart- 00:08:41.019 [2024-10-09 00:16:11.406676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.406703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.406745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073707454463 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.406762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.019 #75 NEW cov: 12508 ft: 15966 corp: 26/1819b lim: 100 exec/s: 75 rss: 74Mb L: 45/100 MS: 1 EraseBytes- 00:08:41.019 [2024-10-09 00:16:11.466945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:512 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.466974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.467013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.467030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.467087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.467103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.019 #76 NEW cov: 12508 ft: 16030 corp: 27/1884b lim: 100 exec/s: 76 rss: 75Mb L: 65/100 MS: 1 ChangeBinInt- 00:08:41.019 [2024-10-09 00:16:11.527297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.527325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.527376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.527391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.527445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.527461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.527518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.527534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.019 #80 NEW cov: 12508 ft: 16065 corp: 28/1983b lim: 100 exec/s: 80 rss: 75Mb L: 99/100 MS: 4 CopyPart-InsertByte-PersAutoDict-InsertRepeatedBytes- DE: "\000\000\000\001"- 00:08:41.019 [2024-10-09 00:16:11.567368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.567397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.567447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.567463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.567522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446719884453740543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.567538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.019 [2024-10-09 00:16:11.567594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.567609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.019 #81 NEW cov: 12508 ft: 16081 corp: 29/2074b lim: 100 exec/s: 81 rss: 75Mb L: 91/100 MS: 1 CopyPart- 00:08:41.019 [2024-10-09 00:16:11.627058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.019 [2024-10-09 00:16:11.627086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 #82 NEW cov: 12508 ft: 16130 corp: 30/2100b lim: 100 exec/s: 82 rss: 75Mb L: 26/100 MS: 1 CopyPart- 00:08:41.279 [2024-10-09 00:16:11.687760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.687789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.687834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.687852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.687924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446719884453740543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.687941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.687999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.688014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.279 #83 NEW cov: 12508 ft: 16140 corp: 31/2185b lim: 100 exec/s: 83 rss: 75Mb L: 85/100 MS: 1 ShuffleBytes- 00:08:41.279 [2024-10-09 00:16:11.727678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.727705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.727744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.727762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.727822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.727839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.279 #84 NEW cov: 12508 ft: 16148 corp: 32/2256b lim: 100 exec/s: 84 rss: 75Mb L: 71/100 MS: 1 InsertRepeatedBytes- 00:08:41.279 [2024-10-09 00:16:11.767933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.767961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.768007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.768023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.768080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9621243513505054719 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.768099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.768157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.768173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.279 #85 NEW cov: 12508 ft: 16234 corp: 33/2342b lim: 100 exec/s: 85 rss: 75Mb L: 86/100 MS: 1 ShuffleBytes- 00:08:41.279 [2024-10-09 00:16:11.808197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.808225] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.808275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.808292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.808351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446719884453740543 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.808367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.808425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.808441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:41.279 #86 NEW cov: 12508 ft: 16245 corp: 34/2433b lim: 100 exec/s: 86 rss: 75Mb L: 91/100 MS: 1 ChangeByte- 00:08:41.279 [2024-10-09 00:16:11.868043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.868070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.868108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.868124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.868180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65531 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.868197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.279 #87 NEW cov: 12508 ft: 16262 corp: 35/2500b lim: 100 exec/s: 87 rss: 75Mb L: 67/100 MS: 1 ChangeBinInt- 00:08:41.279 [2024-10-09 00:16:11.908367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.908395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.279 [2024-10-09 00:16:11.908447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.279 [2024-10-09 00:16:11.908465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.280 [2024-10-09 00:16:11.908522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.280 [2024-10-09 
00:16:11.908545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:08:41.280 [2024-10-09 00:16:11.908605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446743287730536447 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:41.280 [2024-10-09 00:16:11.908622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:08:41.539 #88 NEW cov: 12508 ft: 16270 corp: 36/2599b lim: 100 exec/s: 44 rss: 75Mb L: 99/100 MS: 1 CrossOver-
00:08:41.539 #88 DONE cov: 12508 ft: 16270 corp: 36/2599b lim: 100 exec/s: 44 rss: 75Mb
00:08:41.539 ###### Recommended dictionary. ######
00:08:41.539 "\000\000\000\001" # Uses: 1
00:08:41.539 ###### End of recommended dictionary. ######
00:08:41.539 Done 88 runs in 2 second(s)
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT
00:08:41.539
00:08:41.539 real 1m5.764s
00:08:41.539 user 1m41.603s
00:08:41.539 sys 0m7.793s
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:41.539 00:16:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:41.539 ************************************
00:08:41.539 END TEST nvmf_llvm_fuzz
00:08:41.539 ************************************
00:08:41.539 00:16:12 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}"
00:08:41.539 00:16:12 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in
00:08:41.539 00:16:12 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh
00:08:41.539 00:16:12 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:41.539 00:16:12 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:41.539 00:16:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:41.539 ************************************
00:08:41.539 START TEST vfio_llvm_fuzz
00:08:41.539 ************************************
00:08:41.539 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh * Looking for test storage...
00:08:41.800 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.800 --rc genhtml_branch_coverage=1 00:08:41.800 --rc genhtml_function_coverage=1 00:08:41.800 --rc genhtml_legend=1 00:08:41.800 --rc geninfo_all_blocks=1 00:08:41.800 --rc geninfo_unexecuted_blocks=1 00:08:41.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.800 ' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.800 --rc genhtml_branch_coverage=1 00:08:41.800 --rc genhtml_function_coverage=1 00:08:41.800 --rc genhtml_legend=1 00:08:41.800 --rc geninfo_all_blocks=1 00:08:41.800 --rc geninfo_unexecuted_blocks=1 00:08:41.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.800 ' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.800 --rc genhtml_branch_coverage=1 00:08:41.800 --rc genhtml_function_coverage=1 00:08:41.800 --rc genhtml_legend=1 00:08:41.800 --rc geninfo_all_blocks=1 00:08:41.800 --rc geninfo_unexecuted_blocks=1 00:08:41.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.800 ' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.800 --rc genhtml_branch_coverage=1 00:08:41.800 --rc genhtml_function_coverage=1 00:08:41.800 --rc genhtml_legend=1 00:08:41.800 --rc geninfo_all_blocks=1 00:08:41.800 --rc geninfo_unexecuted_blocks=1 00:08:41.800 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.800 ' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:41.800 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:41.801 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:41.801 #define SPDK_CONFIG_H 00:08:41.801 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:41.801 #define SPDK_CONFIG_APPS 1 00:08:41.801 #define SPDK_CONFIG_ARCH native 00:08:41.801 #undef SPDK_CONFIG_ASAN 00:08:41.801 #undef SPDK_CONFIG_AVAHI 00:08:41.801 #undef SPDK_CONFIG_CET 00:08:41.801 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:41.801 #define SPDK_CONFIG_COVERAGE 1 00:08:41.801 #define SPDK_CONFIG_CROSS_PREFIX 00:08:41.801 #undef SPDK_CONFIG_CRYPTO 00:08:41.801 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:41.801 #undef SPDK_CONFIG_CUSTOMOCF 00:08:41.801 #undef SPDK_CONFIG_DAOS 00:08:41.801 #define SPDK_CONFIG_DAOS_DIR 00:08:41.801 #define SPDK_CONFIG_DEBUG 1 00:08:41.801 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:41.801 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:41.801 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:41.801 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:41.801 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:41.801 #undef SPDK_CONFIG_DPDK_UADK 00:08:41.801 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:41.801 #define SPDK_CONFIG_EXAMPLES 1 00:08:41.801 #undef SPDK_CONFIG_FC 00:08:41.801 #define SPDK_CONFIG_FC_PATH 00:08:41.801 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:41.801 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:41.801 #define SPDK_CONFIG_FSDEV 1 00:08:41.801 #undef SPDK_CONFIG_FUSE 00:08:41.801 #define SPDK_CONFIG_FUZZER 1 00:08:41.801 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:41.801 #undef SPDK_CONFIG_GOLANG 00:08:41.801 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:41.801 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:41.801 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:41.801 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:41.801 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:41.801 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:41.801 #undef SPDK_CONFIG_HAVE_LZ4 00:08:41.801 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:41.802 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:41.802 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:41.802 #define SPDK_CONFIG_IDXD 1 00:08:41.802 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:41.802 #undef SPDK_CONFIG_IPSEC_MB 00:08:41.802 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:41.802 #define SPDK_CONFIG_ISAL 1 00:08:41.802 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:41.802 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:41.802 #define SPDK_CONFIG_LIBDIR 00:08:41.802 #undef SPDK_CONFIG_LTO 00:08:41.802 #define SPDK_CONFIG_MAX_LCORES 128 00:08:41.802 #define SPDK_CONFIG_NVME_CUSE 1 00:08:41.802 #undef SPDK_CONFIG_OCF 00:08:41.802 #define SPDK_CONFIG_OCF_PATH 00:08:41.802 #define SPDK_CONFIG_OPENSSL_PATH 00:08:41.802 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:41.802 #define SPDK_CONFIG_PGO_DIR 00:08:41.802 #undef SPDK_CONFIG_PGO_USE 00:08:41.802 #define SPDK_CONFIG_PREFIX /usr/local 00:08:41.802 #undef SPDK_CONFIG_RAID5F 00:08:41.802 #undef SPDK_CONFIG_RBD 00:08:41.802 #define SPDK_CONFIG_RDMA 1 00:08:41.802 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:41.802 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:41.802 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:41.802 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:41.802 #undef SPDK_CONFIG_SHARED 00:08:41.802 #undef SPDK_CONFIG_SMA 00:08:41.802 #define SPDK_CONFIG_TESTS 1 00:08:41.802 #undef SPDK_CONFIG_TSAN 00:08:41.802 #define SPDK_CONFIG_UBLK 1 00:08:41.802 #define SPDK_CONFIG_UBSAN 1 00:08:41.802 #undef SPDK_CONFIG_UNIT_TESTS 00:08:41.802 #undef SPDK_CONFIG_URING 00:08:41.802 #define SPDK_CONFIG_URING_PATH 00:08:41.802 #undef SPDK_CONFIG_URING_ZNS 00:08:41.802 #undef SPDK_CONFIG_USDT 00:08:41.802 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:41.802 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:41.802 #define SPDK_CONFIG_VFIO_USER 1 00:08:41.802 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:41.802 #define SPDK_CONFIG_VHOST 1 00:08:41.802 #define SPDK_CONFIG_VIRTIO 1 00:08:41.802 #undef SPDK_CONFIG_VTUNE 00:08:41.802 #define SPDK_CONFIG_VTUNE_DIR 00:08:41.802 #define SPDK_CONFIG_WERROR 1 00:08:41.802 #define SPDK_CONFIG_WPDK_DIR 00:08:41.802 #undef SPDK_CONFIG_XNVME 00:08:41.802 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:41.802 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:41.803 00:16:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
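The long alternation of ": 0" (or ": 1", ": rdma") records with "export SPDK_TEST_*" records traced here is bash xtrace rendering the usual default-assignment idiom in autotest_common.sh. A minimal sketch of that idiom, with a hypothetical flag name standing in for the real ones:

    # ":" is the no-op builtin; the ${VAR:=default} expansion inside it
    # assigns the default only when the caller left the flag unset.
    : "${SPDK_TEST_EXAMPLE:=0}"   # hypothetical flag; xtrace prints this step as ": 0"
    export SPDK_TEST_EXAMPLE      # ...and this one as "export SPDK_TEST_EXAMPLE"

This is why flags preset earlier in the run (SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, and so on) keep their values here instead of being reset to the defaults.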
00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:41.803 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:08:41.804 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3894167 ]] 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3894167 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.qHyp4y 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.qHyp4y/tests/vfio /tmp/spdk.qHyp4y 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86305878016 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:08:42.062 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8194416640 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.063 
00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894159872 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5902336 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249551360 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=598016 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:08:42.063 * Looking for test storage... 
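The mounts/fss/sizes/uses/avails assignments above are set_test_storage() loading the output of "df -T | grep -v Filesystem" into parallel associative arrays keyed by mount point. A rough sketch reassembled from the traced fragments (the array names and the read order come straight from the trace; the loop framing is inferred, not copied):

    declare -A mounts fss sizes avails uses   # the trace uses "local -A" inside the function
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source   # backing device: spdk_devtmpfs, /dev/pmem0, spdk_root, tmpfs
        fss["$mount"]=$fs          # filesystem type: devtmpfs, ext2, overlay, tmpfs
        sizes["$mount"]=$size      # total capacity of the mount
        uses["$mount"]=$use        # space already consumed
        avails["$mount"]=$avail    # free space, later compared against requested_size
    done < <(df -T | grep -v Filesystem)

The "* Looking for test storage..." search that follows walks storage_candidates and accepts the first directory whose mount reports avails at or above requested_size, which is how the overlay root (86305878016 available) is chosen below.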
00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86305878016 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10409009152 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:42.063 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.063 --rc genhtml_branch_coverage=1 00:08:42.063 --rc genhtml_function_coverage=1 00:08:42.063 --rc genhtml_legend=1 00:08:42.063 --rc geninfo_all_blocks=1 00:08:42.063 --rc geninfo_unexecuted_blocks=1 00:08:42.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.063 ' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.063 --rc genhtml_branch_coverage=1 00:08:42.063 --rc genhtml_function_coverage=1 00:08:42.063 --rc genhtml_legend=1 00:08:42.063 --rc geninfo_all_blocks=1 00:08:42.063 --rc geninfo_unexecuted_blocks=1 00:08:42.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.063 ' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.063 --rc genhtml_branch_coverage=1 00:08:42.063 --rc genhtml_function_coverage=1 00:08:42.063 --rc genhtml_legend=1 00:08:42.063 --rc geninfo_all_blocks=1 00:08:42.063 --rc geninfo_unexecuted_blocks=1 00:08:42.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.063 ' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.063 --rc genhtml_branch_coverage=1 00:08:42.063 --rc genhtml_function_coverage=1 00:08:42.063 --rc genhtml_legend=1 00:08:42.063 --rc geninfo_all_blocks=1 00:08:42.063 --rc geninfo_unexecuted_blocks=1 00:08:42.063 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.063 ' 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:42.063 00:16:12 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:42.063 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:42.063 00:16:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:42.063 [2024-10-09 00:16:12.629369] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:08:42.063 [2024-10-09 00:16:12.629436] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894229 ] 00:08:42.321 [2024-10-09 00:16:12.706354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.321 [2024-10-09 00:16:12.789931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.580 INFO: Running with entropic power schedule (0xFF, 100). 00:08:42.580 INFO: Seed: 1012506865 00:08:42.580 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da), 00:08:42.580 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0), 00:08:42.580 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:42.580 INFO: A corpus is not provided, starting from an empty corpus 00:08:42.580 #2 INITED exec/s: 0 rss: 67Mb 00:08:42.580 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:42.580 This may also happen if the target rejected all inputs we tried so far 00:08:42.580 [2024-10-09 00:16:13.056615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:42.839 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:42.839 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:42.839 #19 NEW cov: 11121 ft: 11090 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 CMP-CrossOver- DE: "\377\377\377\011"- 00:08:43.096 #20 NEW cov: 11135 ft: 14381 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 CrossOver- 00:08:43.096 #26 NEW cov: 11138 ft: 15726 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:08:43.355 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:43.355 #27 NEW cov: 11155 ft: 15903 corp: 5/25b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:43.355 #28 NEW cov: 11155 ft: 16369 corp: 6/31b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:43.613 #29 NEW cov: 11155 ft: 16596 corp: 7/37b lim: 6 exec/s: 29 rss: 77Mb L: 6/6 MS: 1 PersAutoDict- DE: "\377\377\377\011"- 00:08:43.613 #30 NEW cov: 11155 ft: 16657 corp: 8/43b lim: 6 exec/s: 30 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:08:43.871 #35 NEW cov: 11155 ft: 17013 corp: 9/49b lim: 6 exec/s: 35 rss: 77Mb L: 6/6 MS: 5 EraseBytes-CopyPart-CrossOver-ChangeBit-InsertByte- 00:08:43.871 #36 NEW cov: 11155 ft: 17086 corp: 10/55b lim: 6 exec/s: 36 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:44.130 #41 NEW cov: 11155 ft: 17134 corp: 11/61b lim: 6 exec/s: 41 rss: 77Mb L: 6/6 MS: 5 CrossOver-CrossOver-ShuffleBytes-ChangeByte-CrossOver- 00:08:44.130 #42 NEW cov: 11155 ft: 17334 corp: 12/67b lim: 6 exec/s: 42 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:08:44.389 #43 NEW cov: 11162 ft: 17638 
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%;
00:08:44.908 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:44.908 00:16:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1
00:08:44.908 [2024-10-09 00:16:15.419600] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:44.908 [2024-10-09 00:16:15.419682] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894591 ]
00:08:44.908 [2024-10-09 00:16:15.497537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.186 [2024-10-09 00:16:15.582759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.186 INFO: Running with entropic power schedule (0xFF, 100).
00:08:45.187 INFO: Seed: 3802490300
00:08:45.458 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:45.458 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:45.458 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1
00:08:45.458 INFO: A corpus is not provided, starting from an empty corpus
00:08:45.458 #2 INITED exec/s: 0 rss: 67Mb
00:08:45.458 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:45.458 This may also happen if the target rejected all inputs we tried so far
00:08:45.458 [2024-10-09 00:16:15.846661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller
00:08:45.458 [2024-10-09 00:16:15.929732] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:45.458 [2024-10-09 00:16:15.929759] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:45.458 [2024-10-09 00:16:15.929778] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:45.720 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71
00:08:45.720 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:45.720 #10 NEW cov: 11124 ft: 11091 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 3 CopyPart-InsertByte-CrossOver-
00:08:45.979 [2024-10-09 00:16:16.433630] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:45.979 [2024-10-09 00:16:16.433666] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:45.979 [2024-10-09 00:16:16.433687] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:45.979 #11 NEW cov: 11138 ft: 14446 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit-
00:08:46.237 [2024-10-09 00:16:16.636123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:46.237 [2024-10-09 00:16:16.636146] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:46.237 [2024-10-09 00:16:16.636164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:46.237 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:46.237 #12 NEW cov: 11155 ft: 15152 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeByte-
00:08:46.237 [2024-10-09 00:16:16.825421] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:46.238 [2024-10-09 00:16:16.825443] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:46.238 [2024-10-09 00:16:16.825461] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:46.496 #13 NEW cov: 11155 ft: 16440 corp: 5/17b lim: 4 exec/s: 13 rss: 76Mb L: 4/4 MS: 1 ChangeByte-
00:08:46.496 [2024-10-09 00:16:17.032375] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:46.496 [2024-10-09 00:16:17.032399] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:46.496 [2024-10-09 00:16:17.032416] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:46.754 #14 NEW cov: 11155 ft: 16637 corp: 6/21b lim: 4 exec/s: 14 rss: 77Mb L: 4/4 MS: 1 CrossOver-
00:08:46.754 [2024-10-09 00:16:17.232902] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:46.754 [2024-10-09 00:16:17.232926] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:46.754 [2024-10-09 00:16:17.232944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:46.754 #15 NEW cov: 11155 ft: 17106 corp: 7/25b lim: 4 exec/s: 15 rss: 77Mb L: 4/4 MS: 1 CopyPart-
00:08:47.013 [2024-10-09 00:16:17.421581] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:47.013 [2024-10-09 00:16:17.421603] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:47.013 [2024-10-09 00:16:17.421621] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:47.013 #22 NEW cov: 11155 ft: 17358 corp: 8/29b lim: 4 exec/s: 22 rss: 77Mb L: 4/4 MS: 2 CrossOver-InsertByte-
00:08:47.013 [2024-10-09 00:16:17.616965] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:47.013 [2024-10-09 00:16:17.616988] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:47.013 [2024-10-09 00:16:17.617006] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:47.271 #23 NEW cov: 11162 ft: 17615 corp: 9/33b lim: 4 exec/s: 23 rss: 77Mb L: 4/4 MS: 1 CMP- DE: "\377\377\377\017"-
00:08:47.271 [2024-10-09 00:16:17.805005] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1
00:08:47.271 [2024-10-09 00:16:17.805027] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument
00:08:47.271 [2024-10-09 00:16:17.805044] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure
00:08:47.530 #24 NEW cov: 11162 ft: 17657 corp: 10/37b lim: 4 exec/s: 12 rss: 77Mb L: 4/4 MS: 1 CopyPart-
00:08:47.530 #24 DONE cov: 11162 ft: 17657 corp: 10/37b lim: 4 exec/s: 12 rss: 77Mb
00:08:47.530 ###### Recommended dictionary. ######
00:08:47.530 "\377\377\377\017" # Uses: 0
00:08:47.530 ###### End of recommended dictionary. ######
00:08:47.530 Done 24 runs in 2 second(s)
00:08:47.530 [2024-10-09 00:16:17.938018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller
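Every fuzzer instance in this log is prepared the same way by the traced vfio/run.sh: remove the previous instance's directories, create fresh vfio-user and corpus directories, rewrite the template JSON config to point at the new instance's socket paths, write the LeakSanitizer suppressions, and launch llvm_vfio_fuzz with the per-instance -Z fuzzer type. A condensed sketch of that sequence, reconstructed from the xtrace output above and not the verbatim script ($rootdir, the SPDK checkout, and the redirection of sed into the new config are assumptions the trace does not show):

  # Sketch of the per-instance setup visible in the xtrace above;
  # reconstructed from the logged commands, not the verbatim vfio/run.sh.
  start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3
    local fuzzer_dir=/tmp/vfio-user-$fuzzer_type
    local vfiouser_dir=$fuzzer_dir/domain/1
    local vfiouser_io_dir=$fuzzer_dir/domain/2
    local vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
    local corpus_dir=$rootdir/../corpus/llvm_vfio_$fuzzer_type   # $rootdir assumed: SPDK repo root
    local suppress_file=/var/tmp/suppress_vfio_fuzz
    local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

    mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"
    # Rewrite the template config so this instance gets its own vfio-user sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%;
  s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
      "$rootdir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"
    # LeakSanitizer suppressions for two known SPDK leak sites.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    "$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
      -m "$core" -s 0 -P "$rootdir/../output/llvm/" \
      -F "$vfiouser_dir" -c "$vfiouser_cfg" -t "$timen" \
      -D "$corpus_dir" -Y "$vfiouser_io_dir" \
      -r "$fuzzer_dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"
  }
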
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%;
00:08:47.790 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:47.790 00:16:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2
00:08:47.790 [2024-10-09 00:16:18.258769] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:47.790 [2024-10-09 00:16:18.258841] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894946 ]
00:08:47.790 [2024-10-09 00:16:18.338251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:48.049 [2024-10-09 00:16:18.428700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:48.049 INFO: Running with entropic power schedule (0xFF, 100).
00:08:48.049 INFO: Seed: 2364513963
00:08:48.049 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:48.049 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:48.049 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2
00:08:48.049 INFO: A corpus is not provided, starting from an empty corpus
00:08:48.049 #2 INITED exec/s: 0 rss: 68Mb
00:08:48.049 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:48.049 This may also happen if the target rejected all inputs we tried so far
00:08:48.308 [2024-10-09 00:16:18.705483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller
00:08:48.308 [2024-10-09 00:16:18.746581] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:48.567 NEW_FUNC[1/672]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103
00:08:48.567 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:48.567 #29 NEW cov: 11103 ft: 11069 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes-
00:08:48.826 [2024-10-09 00:16:19.216741] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:48.826 #30 NEW cov: 11117 ft: 13978 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CopyPart-
00:08:48.826 [2024-10-09 00:16:19.390045] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.085 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:49.085 #31 NEW cov: 11134 ft: 14737 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:08:49.085 [2024-10-09 00:16:19.562745] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.085 #37 NEW cov: 11134 ft: 15202 corp: 5/33b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt-
00:08:49.344 [2024-10-09 00:16:19.735622] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.344 #43 NEW cov: 11134 ft: 16320 corp: 6/41b lim: 8 exec/s: 43 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt-
00:08:49.344 [2024-10-09 00:16:19.908311] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.603 #45 NEW cov: 11134 ft: 16671 corp: 7/49b lim: 8 exec/s: 45 rss: 75Mb L: 8/8 MS: 2 CrossOver-CrossOver-
00:08:49.603 [2024-10-09 00:16:20.067332] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.603 #51 NEW cov: 11134 ft: 17126 corp: 8/57b lim: 8 exec/s: 51 rss: 75Mb L: 8/8 MS: 1 ChangeBit-
00:08:49.603 [2024-10-09 00:16:20.183790] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.861 #57 NEW cov: 11134 ft: 17644 corp: 9/65b lim: 8 exec/s: 57 rss: 75Mb L: 8/8 MS: 1 CopyPart-
00:08:49.861 [2024-10-09 00:16:20.310312] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:49.861 #58 NEW cov: 11134 ft: 17937 corp: 10/73b lim: 8 exec/s: 58 rss: 75Mb L: 8/8 MS: 1 CMP- DE: "\376\377\377\377"-
00:08:49.861 [2024-10-09 00:16:20.436420] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:50.119 #64 NEW cov: 11141 ft: 18131 corp: 11/81b lim: 8 exec/s: 64 rss: 75Mb L: 8/8 MS: 1 CrossOver-
00:08:50.119 [2024-10-09 00:16:20.560201] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:50.119 #70 NEW cov: 11141 ft: 18190 corp: 12/89b lim: 8 exec/s: 70 rss: 75Mb L: 8/8 MS: 1 CMP- DE: "\010l\036{\265\030'\000"-
00:08:50.119 [2024-10-09 00:16:20.684282] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5
00:08:50.378 #71 NEW cov: 11141 ft: 18234 corp: 13/97b lim: 8 exec/s: 35 rss: 75Mb L: 8/8 MS: 1 CrossOver-
00:08:50.378 #71 DONE cov: 11141 ft: 18234 corp: 13/97b lim: 8 exec/s: 35 rss: 75Mb
00:08:50.378 ###### Recommended dictionary. ######
00:08:50.378 "\376\377\377\377" # Uses: 0
00:08:50.378 "\010l\036{\265\030'\000" # Uses: 0
00:08:50.378 ###### End of recommended dictionary. ######
00:08:50.378 Done 71 runs in 2 second(s)
00:08:50.378 [2024-10-09 00:16:20.775021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%;
00:08:50.636 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:50.636 00:16:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3
00:08:50.636 [2024-10-09 00:16:21.096363] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:50.636 [2024-10-09 00:16:21.096430] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895356 ]
00:08:50.636 [2024-10-09 00:16:21.178928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.636 [2024-10-09 00:16:21.261029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.894 INFO: Running with entropic power schedule (0xFF, 100).
00:08:50.894 INFO: Seed: 871542188
00:08:50.894 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:50.894 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:50.894 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3
00:08:50.894 INFO: A corpus is not provided, starting from an empty corpus
00:08:50.894 #2 INITED exec/s: 0 rss: 68Mb
00:08:50.894 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:50.895 This may also happen if the target rejected all inputs we tried so far
00:08:50.895 [2024-10-09 00:16:21.510492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller
00:08:51.411 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124
00:08:51.411 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:51.411 #78 NEW cov: 11110 ft: 10998 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes-
00:08:51.669 #84 NEW cov: 11125 ft: 13709 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit-
00:08:51.934 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:51.934 #100 NEW cov: 11142 ft: 15697 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:51.934 #101 NEW cov: 11142 ft: 16495 corp: 5/129b lim: 32 exec/s: 101 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:08:52.197 #102 NEW cov: 11142 ft: 17109 corp: 6/161b lim: 32 exec/s: 102 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:52.455 #103 NEW cov: 11142 ft: 17240 corp: 7/193b lim: 32 exec/s: 103 rss: 75Mb L: 32/32 MS: 1 ChangeByte-
00:08:52.455 #104 NEW cov: 11142 ft: 17375 corp: 8/225b lim: 32 exec/s: 104 rss: 75Mb L: 32/32 MS: 1 ChangeByte-
00:08:52.712 #105 NEW cov: 11149 ft: 17640 corp: 9/257b lim: 32 exec/s: 105 rss: 75Mb L: 32/32 MS: 1 ChangeBit-
00:08:52.970 #106 NEW cov: 11149 ft: 18038 corp: 10/289b lim: 32 exec/s: 106 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes-
00:08:53.228 #107 NEW cov: 11149 ft: 18066 corp: 11/321b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 1 ChangeASCIIInt-
00:08:53.228 #107 DONE cov: 11149 ft: 18066 corp: 11/321b lim: 32 exec/s: 53 rss: 75Mb
00:08:53.228 Done 107 runs in 2 second(s)
00:08:53.228 [2024-10-09 00:16:23.677016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%;
00:08:53.486 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:53.486 00:16:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4
00:08:53.746 [2024-10-09 00:16:23.984296] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:53.746 [2024-10-09 00:16:23.984371] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895819 ]
00:08:53.746 [2024-10-09 00:16:24.062663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.746 [2024-10-09 00:16:24.146045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.746 INFO: Running with entropic power schedule (0xFF, 100).
00:08:53.746 INFO: Seed: 3774547305
00:08:53.746 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:53.746 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:53.746 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4
00:08:53.746 INFO: A corpus is not provided, starting from an empty corpus
00:08:53.746 #2 INITED exec/s: 0 rss: 68Mb
00:08:53.746 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:53.746 This may also happen if the target rejected all inputs we tried so far
00:08:54.004 [2024-10-09 00:16:24.404249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:08:54.004 [2024-10-09 00:16:24.454896] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa00, 0xa00) fd=327 offset=0xa00000000000000 prot=0x3: Invalid argument
00:08:54.004 [2024-10-09 00:16:24.454922] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa00, 0xa00) offset=0xa00000000000000 flags=0x3: Invalid argument
00:08:54.004 [2024-10-09 00:16:24.454934] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:08:54.004 [2024-10-09 00:16:24.454952] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:54.004 [2024-10-09 00:16:24.455864] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa00, 0xa00) flags=0: No such file or directory
00:08:54.004 [2024-10-09 00:16:24.455878] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:54.004 [2024-10-09 00:16:24.455894] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:54.265 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:08:54.265 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:54.265 #166 NEW cov: 11123 ft: 11081 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 CrossOver-InsertRepeatedBytes-InsertByte-CrossOver-
00:08:54.525 [2024-10-09 00:16:24.916199] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 13889313456554770431 > max 8796093022208
00:08:54.525 [2024-10-09 00:16:24.916236] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffc028c0c0c0c0c0, 0xc080e9c0c0c0c0bf) offset=0x31c0c0c0c0c0c0c0 flags=0x3: No space left on device
00:08:54.525 [2024-10-09 00:16:24.916249] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:08:54.525 [2024-10-09 00:16:24.916267] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:54.525 [2024-10-09 00:16:24.917237] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffc028c0c0c0c0c0, 0xc080e9c0c0c0c0bf) flags=0: No such file or directory
00:08:54.525 [2024-10-09 00:16:24.917257] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:54.525 [2024-10-09 00:16:24.917274] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:54.525 #176 NEW cov: 11140 ft: 14271 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 5 InsertByte-InsertRepeatedBytes-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes-
00:08:54.784 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:54.784 #177 NEW cov: 11161 ft: 14689 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt-
00:08:54.784 [2024-10-09 00:16:25.265654] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x31000a00, 0x31000a00) fd=329 offset=0xa00000000000000 prot=0x3: Invalid argument
00:08:54.784 [2024-10-09 00:16:25.265682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x31000a00, 0x31000a00) offset=0xa00000000000000 flags=0x3: Invalid argument
00:08:54.784 [2024-10-09 00:16:25.265694] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:08:54.784 [2024-10-09 00:16:25.265711] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:54.784 [2024-10-09 00:16:25.266685] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x31000a00, 0x31000a00) flags=0: No such file or directory
00:08:54.784 [2024-10-09 00:16:25.266706] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:54.784 [2024-10-09 00:16:25.266723] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:54.784 #183 NEW cov: 11161 ft: 15006 corp: 5/129b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeByte-
00:08:55.041 #186 NEW cov: 11161 ft: 15071 corp: 6/161b lim: 32 exec/s: 186 rss: 76Mb L: 32/32 MS: 3 InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes-
00:08:55.299 #187 NEW cov: 11161 ft: 16076 corp: 7/193b lim: 32 exec/s: 187 rss: 76Mb L: 32/32 MS: 1 ChangeByte-
00:08:55.299 #193 NEW cov: 11161 ft: 16181 corp: 8/225b lim: 32 exec/s: 193 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:08:55.557 [2024-10-09 00:16:25.938835] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa0000000a00, 0xa0000000a00) fd=329 offset=0xa00000000000000 prot=0x3: Invalid argument
00:08:55.557 [2024-10-09 00:16:25.938863] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa0000000a00, 0xa0000000a00) offset=0xa00000000000000 flags=0x3: Invalid argument
00:08:55.557 [2024-10-09 00:16:25.938875] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:08:55.557 [2024-10-09 00:16:25.938892] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:55.557 [2024-10-09 00:16:25.939865] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa0000000a00, 0xa0000000a00) flags=0: No such file or directory
00:08:55.557 [2024-10-09 00:16:25.939885] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:55.557 [2024-10-09 00:16:25.939903] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:55.558 #199 NEW cov: 11161 ft: 16306 corp: 9/257b lim: 32 exec/s: 199 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:08:55.558 [2024-10-09 00:16:26.109028] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa00, 0xa00) fd=329 offset=0xa00000000000000 prot=0x3: Invalid argument
00:08:55.558 [2024-10-09 00:16:26.109053] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa00, 0xa00) offset=0xa00000000000000 flags=0x3: Invalid argument
00:08:55.558 [2024-10-09 00:16:26.109065] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:08:55.558 [2024-10-09 00:16:26.109083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:55.558 [2024-10-09 00:16:26.110041] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa00, 0xa00) flags=0: No such file or directory
00:08:55.558 [2024-10-09 00:16:26.110062] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:55.558 [2024-10-09 00:16:26.110079] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:55.816 #200 NEW cov: 11168 ft: 16423 corp: 10/289b lim: 32 exec/s: 200 rss: 76Mb L: 32/32 MS: 1 CopyPart-
00:08:55.816 #211 NEW cov: 11168 ft: 16565 corp: 11/321b lim: 32 exec/s: 211 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:08:55.816 [2024-10-09 00:16:26.443473] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa0000000a00, 0xa0000000a00) fd=329 offset=0xa00000000000000 prot=0x3: Invalid argument
00:08:55.816 [2024-10-09 00:16:26.443499] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa0000000a00, 0xa0000000a00) offset=0xa00000000000000 flags=0x3: Invalid argument
00:08:55.816 [2024-10-09 00:16:26.443510] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument
00:08:55.816 [2024-10-09 00:16:26.443527] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:08:55.816 [2024-10-09 00:16:26.444508] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa0000000a00, 0xa0000000a00) flags=0: No such file or directory
00:08:55.816 [2024-10-09 00:16:26.444528] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:08:55.816 [2024-10-09 00:16:26.444545] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:08:56.075 #217 NEW cov: 11168 ft: 16651 corp: 12/353b lim: 32 exec/s: 108 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:08:56.075 #217 DONE cov: 11168 ft: 16651 corp: 12/353b lim: 32 exec/s: 108 rss: 76Mb
00:08:56.075 Done 217 runs in 2 second(s)
00:08:56.075 [2024-10-09 00:16:26.568023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:08:56.334 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:56.334 00:16:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:08:56.334 [2024-10-09 00:16:26.883715] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:56.334 [2024-10-09 00:16:26.883801] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896194 ]
00:08:56.334 [2024-10-09 00:16:26.960276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.593 [2024-10-09 00:16:27.044827] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.864 INFO: Running with entropic power schedule (0xFF, 100).
00:08:56.864 INFO: Seed: 2374578747
00:08:56.864 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:56.864 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:56.864 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:08:56.864 INFO: A corpus is not provided, starting from an empty corpus
00:08:56.864 #2 INITED exec/s: 0 rss: 68Mb
00:08:56.864 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:56.864 This may also happen if the target rejected all inputs we tried so far
00:08:56.864 [2024-10-09 00:16:27.298729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:08:56.864 [2024-10-09 00:16:27.347850] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:56.864 [2024-10-09 00:16:27.347898] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:57.121 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:08:57.121 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:57.121 #77 NEW cov: 11122 ft: 11087 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 InsertRepeatedBytes-CrossOver-ShuffleBytes-EraseBytes-InsertRepeatedBytes-
00:08:57.379 [2024-10-09 00:16:27.812358] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:57.379 [2024-10-09 00:16:27.812402] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:57.379 #83 NEW cov: 11137 ft: 14442 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBit-
00:08:57.379 [2024-10-09 00:16:27.999220] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:57.379 [2024-10-09 00:16:27.999251] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:57.638 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:08:57.638 #84 NEW cov: 11157 ft: 15029 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart-
00:08:57.638 [2024-10-09 00:16:28.175849] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:57.638 [2024-10-09 00:16:28.175889] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:57.896 #90 NEW cov: 11157 ft: 15985 corp: 5/53b lim: 13 exec/s: 90 rss: 76Mb L: 13/13 MS: 1 CrossOver-
00:08:57.896 [2024-10-09 00:16:28.360411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:57.896 [2024-10-09 00:16:28.360441] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:57.896 #98 NEW cov: 11157 ft: 16428 corp: 6/66b lim: 13 exec/s: 98 rss: 76Mb L: 13/13 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes-
00:08:58.155 [2024-10-09 00:16:28.536148] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:58.155 [2024-10-09 00:16:28.536179] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:58.155 #99 NEW cov: 11157 ft: 16829 corp: 7/79b lim: 13 exec/s: 99 rss: 76Mb L: 13/13 MS: 1 CopyPart-
00:08:58.155 [2024-10-09 00:16:28.711994] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:58.155 [2024-10-09 00:16:28.712023] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:58.413 #100 NEW cov: 11157 ft: 17327 corp: 8/92b lim: 13 exec/s: 100 rss: 76Mb L: 13/13 MS: 1 ChangeByte-
00:08:58.413 [2024-10-09 00:16:28.885754] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:58.413 [2024-10-09 00:16:28.885785] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:58.671 #101 NEW cov: 11164 ft: 17524 corp: 9/105b lim: 13 exec/s: 101 rss: 76Mb L: 13/13 MS: 1 ChangeASCIIInt-
00:08:58.671 [2024-10-09 00:16:29.059283] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:58.671 [2024-10-09 00:16:29.059316] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:58.930 #107 NEW cov: 11164 ft: 17622 corp: 10/118b lim: 13 exec/s: 107 rss: 76Mb L: 13/13 MS: 1 ChangeByte-
00:08:58.930 [2024-10-09 00:16:29.235049] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:58.930 [2024-10-09 00:16:29.235080] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:58.930 #108 NEW cov: 11164 ft: 17659 corp: 11/131b lim: 13 exec/s: 54 rss: 76Mb L: 13/13 MS: 1 ChangeASCIIInt-
00:08:58.930 #108 DONE cov: 11164 ft: 17659 corp: 11/131b lim: 13 exec/s: 54 rss: 76Mb
00:08:58.930 Done 108 runs in 2 second(s)
00:08:58.930 [2024-10-09 00:16:29.356018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:08:59.205 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:08:59.205 00:16:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:08:59.464 [2024-10-09 00:16:29.681474] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:08:59.464 [2024-10-09 00:16:29.681540] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896551 ]
00:08:59.464 [2024-10-09 00:16:29.758388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.464 [2024-10-09 00:16:29.844736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.464 INFO: Running with entropic power schedule (0xFF, 100).
00:08:59.464 INFO: Seed: 879622259
00:08:59.464 INFO: Loaded 1 modules (381582 inline 8-bit counters): 381582 [0x2bad04c, 0x2c0a2da),
00:08:59.464 INFO: Loaded 1 PC tables (381582 PCs): 381582 [0x2c0a2e0,0x31dcbc0),
00:08:59.464 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:08:59.464 INFO: A corpus is not provided, starting from an empty corpus
00:08:59.464 #2 INITED exec/s: 0 rss: 68Mb
00:08:59.464 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:59.464 This may also happen if the target rejected all inputs we tried so far
00:08:59.722 [2024-10-09 00:16:30.098891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:08:59.722 [2024-10-09 00:16:30.168043] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:59.722 [2024-10-09 00:16:30.168075] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:59.989 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:08:59.989 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:08:59.989 #25 NEW cov: 11118 ft: 11018 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes-
00:09:00.249 [2024-10-09 00:16:30.671194] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:00.249 [2024-10-09 00:16:30.671238] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:00.249 #26 NEW cov: 11132 ft: 13872 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte-
00:09:00.249 [2024-10-09 00:16:30.859690] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:00.249 [2024-10-09 00:16:30.859724] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:00.507 NEW_FUNC[1/1]: 0x1bc8d28 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658
00:09:00.507 #32 NEW cov: 11149 ft: 15451 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeByte-
00:09:00.507 [2024-10-09 00:16:31.060089] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:00.507 [2024-10-09 00:16:31.060119] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:00.765 #38 NEW cov: 11149 ft: 16121 corp: 5/37b lim: 9 exec/s: 38 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt-
00:09:00.765 [2024-10-09 00:16:31.254402] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:00.765 [2024-10-09 00:16:31.254432] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:00.765 #39 NEW cov: 11149 ft: 17064 corp: 6/46b lim: 9 exec/s: 39 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes-
00:09:01.023 [2024-10-09 00:16:31.460709] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:01.023 [2024-10-09 00:16:31.460739] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:01.023 #40 NEW cov: 11149 ft: 17114 corp: 7/55b lim: 9 exec/s: 40 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:09:01.023 [2024-10-09 00:16:31.650906] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:01.023 [2024-10-09 00:16:31.650937] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:01.281 #41 NEW cov: 11149 ft: 17350 corp: 8/64b lim: 9 exec/s: 41 rss: 76Mb L: 9/9 MS: 1 CrossOver-
00:09:01.281 [2024-10-09 00:16:31.840228] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:01.281 [2024-10-09 00:16:31.840259] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:01.539 #42 NEW cov: 11156 ft: 17741 corp: 9/73b lim: 9 exec/s: 42 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt-
00:09:01.539 [2024-10-09 00:16:32.034197] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:09:01.539 [2024-10-09 00:16:32.034230] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:09:01.539 #43 NEW cov: 11156 ft: 17952 corp: 10/82b lim: 9 exec/s: 21 rss: 77Mb L: 9/9 MS: 1 CrossOver-
00:09:01.539 #43 DONE cov: 11156 ft: 17952 corp: 10/82b lim: 9 exec/s: 21 rss: 77Mb
00:09:01.539 Done 43 runs in 2 second(s)
00:09:01.539 [2024-10-09 00:16:32.171030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:09:02.106
00:09:02.106 real 0m20.310s
00:09:02.106 user 0m27.756s
00:09:02.106 sys 0m2.059s
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:02.106 00:16:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:09:02.106 ************************************
00:09:02.106 END TEST vfio_llvm_fuzz
00:09:02.106 ************************************
00:09:02.106
00:09:02.106 real 1m26.389s
00:09:02.106 user 2m9.501s
00:09:02.106 sys 0m10.050s
00:09:02.106 00:16:32 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:02.106 00:16:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:09:02.106 ************************************
00:09:02.106 END TEST llvm_fuzz
00:09:02.106 ************************************
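The "../common.sh@72/@73" xtrace lines repeated between the runs above imply a simple counter loop driving the seven fuzzer instances. A minimal sketch of that loop, assuming fuzz_num=7 (inferred from the -Z 0 through -Z 6 runs, not read from the actual common.sh) and the start_llvm_fuzz shape sketched earlier:

  # Sketch of the driver loop implied by the xtrace output; the increment
  # appearing before the test in the trace matches a for (( ... )) loop.
  fuzz_num=7
  for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1   # fuzzer type, run time in seconds, core mask
  done
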
spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:09:02.106 00:16:32 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:09:02.106 00:16:32 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:09:02.106 00:16:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.106 00:16:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.106 00:16:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:09:02.106 00:16:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:09:02.106 00:16:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:09:02.106 00:16:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.465 INFO: APP EXITING 00:09:07.465 INFO: killing all VMs 00:09:07.465 INFO: killing vhost app 00:09:07.465 INFO: EXIT DONE 00:09:09.999 Waiting for block devices as requested 00:09:09.999 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:09:10.258 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:10.258 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:10.517 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:10.517 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:10.517 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:10.517 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:10.776 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:10.776 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:10.776 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:11.035 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:11.035 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:11.035 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:11.294 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:11.294 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:11.294 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:11.552 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:16.816 Cleaning 00:09:16.816 Removing: /dev/shm/spdk_tgt_trace.pid3872854 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3870364 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3871598 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3872854 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3873388 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3874124 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3874309 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3875124 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3875244 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3875591 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3875966 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3876242 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3876498 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3876897 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3877095 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3877287 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3877533 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3878295 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3880807 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3881023 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3881391 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3881410 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3881962 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3882006 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3882526 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3882612 00:09:16.816 Removing: /var/run/dpdk/spdk_pid3882915 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3882955 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3883140 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3883314 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3883820 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3884087 00:09:16.817 
Removing: /var/run/dpdk/spdk_pid3884287 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3884666 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3885346 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3885736 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3886086 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3886472 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3886778 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3887132 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3887483 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3887851 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3888204 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3888570 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3888925 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3889286 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3889645 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3889997 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3890349 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3890641 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3890971 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3891270 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3891629 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3891985 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3892347 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3892703 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3893062 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3893418 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3893777 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3894229 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3894591 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3894946 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3895356 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3895819 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3896194 00:09:16.817 Removing: /var/run/dpdk/spdk_pid3896551 00:09:16.817 Clean 00:09:16.817 00:16:47 -- common/autotest_common.sh@1451 -- # return 0 00:09:16.817 00:16:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:09:16.817 00:16:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.817 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.817 00:16:47 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:09:16.817 00:16:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.817 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:09:16.817 00:16:47 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:16.817 00:16:47 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:16.817 00:16:47 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:16.817 00:16:47 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:09:17.074 00:16:47 -- spdk/autotest.sh@394 -- # hostname 00:09:17.074 00:16:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:17.074 geninfo: WARNING: invalid characters removed from testname! 
00:09:29.867 00:16:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:37.985 00:17:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:42.174 00:17:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:47.458 00:17:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:52.726 00:17:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:59.305 00:17:28 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:10:03.502 00:17:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
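
The sequence above (autotest.sh@395 through @404) is a conventional lcov post-processing pipeline: merge the baseline and test captures with -a into a single tracefile, then strip paths that should not count toward SPDK coverage with repeated -r passes (bundled dpdk, system files under /usr, example and app directories), and finally delete the intermediates. Condensed into a standalone sketch with shortened file names (only the '/usr/*' pass in the log additionally sets --ignore-errors unused,unused):

    # Merge baseline + test coverage, then filter out third-party and sample code.
    # $LCOV_OPTS is intentionally unquoted so its embedded flags word-split.
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r cov_total.info "$pat" -o cov_total.info
    done
    rm -f cov_base.info cov_test.info
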
00:10:03.502 00:17:34 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:10:03.502 00:17:34 -- common/autotest_common.sh@1681 -- $ lcov --version
00:10:03.502 00:17:34 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:10:03.502 00:17:34 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:10:03.502 00:17:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:10:03.502 00:17:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:10:03.502 00:17:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:10:03.502 00:17:34 -- scripts/common.sh@336 -- $ IFS=.-:
00:10:03.502 00:17:34 -- scripts/common.sh@336 -- $ read -ra ver1
00:10:03.502 00:17:34 -- scripts/common.sh@337 -- $ IFS=.-:
00:10:03.502 00:17:34 -- scripts/common.sh@337 -- $ read -ra ver2
00:10:03.502 00:17:34 -- scripts/common.sh@338 -- $ local 'op=<'
00:10:03.502 00:17:34 -- scripts/common.sh@340 -- $ ver1_l=2
00:10:03.502 00:17:34 -- scripts/common.sh@341 -- $ ver2_l=1
00:10:03.502 00:17:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:10:03.502 00:17:34 -- scripts/common.sh@344 -- $ case "$op" in
00:10:03.503 00:17:34 -- scripts/common.sh@345 -- $ : 1
00:10:03.503 00:17:34 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:10:03.503 00:17:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:03.503 00:17:34 -- scripts/common.sh@365 -- $ decimal 1
00:10:03.503 00:17:34 -- scripts/common.sh@353 -- $ local d=1
00:10:03.503 00:17:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:10:03.503 00:17:34 -- scripts/common.sh@355 -- $ echo 1
00:10:03.503 00:17:34 -- scripts/common.sh@365 -- $ ver1[v]=1
00:10:03.503 00:17:34 -- scripts/common.sh@366 -- $ decimal 2
00:10:03.503 00:17:34 -- scripts/common.sh@353 -- $ local d=2
00:10:03.503 00:17:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:10:03.503 00:17:34 -- scripts/common.sh@355 -- $ echo 2
00:10:03.503 00:17:34 -- scripts/common.sh@366 -- $ ver2[v]=2
00:10:03.503 00:17:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:10:03.503 00:17:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:10:03.503 00:17:34 -- scripts/common.sh@368 -- $ return 0
00:10:03.503 00:17:34 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:03.503 00:17:34 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:10:03.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.503 --rc genhtml_branch_coverage=1
00:10:03.503 --rc genhtml_function_coverage=1
00:10:03.503 --rc genhtml_legend=1
00:10:03.503 --rc geninfo_all_blocks=1
00:10:03.503 --rc geninfo_unexecuted_blocks=1
00:10:03.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:10:03.503 '
00:10:03.503 00:17:34 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:10:03.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.503 --rc genhtml_branch_coverage=1
00:10:03.503 --rc genhtml_function_coverage=1
00:10:03.503 --rc genhtml_legend=1
00:10:03.503 --rc geninfo_all_blocks=1
00:10:03.503 --rc geninfo_unexecuted_blocks=1
00:10:03.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:10:03.503 '
00:10:03.503 00:17:34 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:10:03.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.503 --rc genhtml_branch_coverage=1
00:10:03.503 --rc genhtml_function_coverage=1
00:10:03.503 --rc genhtml_legend=1
00:10:03.503 --rc geninfo_all_blocks=1
00:10:03.503 --rc geninfo_unexecuted_blocks=1
00:10:03.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:10:03.503 '
00:10:03.503 00:17:34 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:10:03.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.503 --rc genhtml_branch_coverage=1
00:10:03.503 --rc genhtml_function_coverage=1
00:10:03.503 --rc genhtml_legend=1
00:10:03.503 --rc geninfo_all_blocks=1
00:10:03.503 --rc geninfo_unexecuted_blocks=1
00:10:03.503 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:10:03.503 '
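
The cmp_versions trace above is how scripts/common.sh decides which option spellings the installed lcov understands: 'lt 1.15 2' splits both version strings on the characters . - :, walks the components numerically, and returns success at the first strictly smaller component, after which the 1.x-style --rc lcov_branch_coverage names are selected for LCOV_OPTS. The same comparison as a self-contained function (a simplified sketch of the traced algorithm; the script's decimal helper, which validates each component, is reduced here to a :-0 default, so components are assumed numeric):

    # Succeed if version $1 is strictly older than version $2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
            if (( a < b )); then return 0; fi
            if (( a > b )); then return 1; fi
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "pre-2.0 lcov"   # prints: pre-2.0 lcov
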
00:10:03.503 00:17:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:10:03.503 00:17:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:10:03.503 00:17:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:10:03.503 00:17:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:03.503 00:17:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:03.503 00:17:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:03.503 00:17:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:03.503 00:17:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:03.503 00:17:34 -- paths/export.sh@5 -- $ export PATH
00:10:03.503 00:17:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:03.503 00:17:34 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:10:03.503 00:17:34 -- common/autobuild_common.sh@486 -- $ date +%s
00:10:03.503 00:17:34 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728425854.XXXXXX
00:10:03.503 00:17:34 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728425854.BEZ6ha
00:10:03.503 00:17:34 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
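
The three PATH assignments at paths/export.sh@2-@4 above prepend the golangci-lint, Go and protoc directories unconditionally, which is why each of them ends up in the final PATH twice. The duplication is harmless but avoidable; a common guarded variant (an illustrative helper, not part of the SPDK scripts) prepends only when the directory is missing:

    # Prepend a directory to PATH only if it is not already a component.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH
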
00:10:03.503 00:17:34 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:10:03.503 00:17:34 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:10:03.503 00:17:34 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:10:03.503 00:17:34 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:10:03.503 00:17:34 -- common/autobuild_common.sh@502 -- $ get_config_params
00:10:03.503 00:17:34 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:10:03.503 00:17:34 -- common/autotest_common.sh@10 -- $ set +x
00:10:03.762 00:17:34 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:10:03.762 00:17:34 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:10:03.762 00:17:34 -- pm/common@17 -- $ local monitor
00:10:03.762 00:17:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:03.762 00:17:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:03.762 00:17:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:03.762 00:17:34 -- pm/common@21 -- $ date +%s
00:10:03.762 00:17:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:03.762 00:17:34 -- pm/common@21 -- $ date +%s
00:10:03.762 00:17:34 -- pm/common@21 -- $ date +%s
00:10:03.762 00:17:34 -- pm/common@25 -- $ sleep 1
00:10:03.762 00:17:34 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728425854
00:10:03.762 00:17:34 -- pm/common@21 -- $ date +%s
00:10:03.762 00:17:34 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728425854
00:10:03.762 00:17:34 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728425854
00:10:03.762 00:17:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728425854
00:10:03.762 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728425854_collect-cpu-temp.pm.log
00:10:03.762 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728425854_collect-cpu-load.pm.log
00:10:03.762 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728425854_collect-vmstat.pm.log
00:10:03.762 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728425854_collect-bmc-pm.bmc.pm.log
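
start_monitor_resources (pm/common@17-@25 above) backgrounds four collectors against the shared power/ output directory, tagging each run with the date +%s id (monitor.autopackage.sh.1728425854); the "Redirecting to ...pm.log" lines are the monitors detaching their output to per-run logs. The pid files consumed by the teardown later in this log are the other half of that contract. A generic sketch of the launch side (helper name and pid-file layout are illustrative; the real collectors manage their own logs and pid files):

    # Launch a collector in the background and record its pid for later shutdown.
    start_monitor() {
        local cmd=$1 outdir=$2 tag
        tag=$(basename "$cmd")
        "$cmd" -d "$outdir" >> "$outdir/$tag.log" 2>&1 &
        echo $! > "$outdir/$tag.pid"
    }

    start_monitor ./collect-cpu-load /tmp/power
    start_monitor ./collect-vmstat /tmp/power
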
00:10:04.696 00:17:35 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:10:04.696 00:17:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:10:04.696 00:17:35 -- spdk/autopackage.sh@14 -- $ timing_finish
00:10:04.696 00:17:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:10:04.696 00:17:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:10:04.696 00:17:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:04.696 00:17:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:10:04.696 00:17:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:10:04.696 00:17:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:10:04.696 00:17:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:04.696 00:17:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:10:04.696 00:17:35 -- pm/common@44 -- $ pid=3903517
00:10:04.696 00:17:35 -- pm/common@50 -- $ kill -TERM 3903517
00:10:04.696 00:17:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:04.696 00:17:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:10:04.696 00:17:35 -- pm/common@44 -- $ pid=3903519
00:10:04.696 00:17:35 -- pm/common@50 -- $ kill -TERM 3903519
00:10:04.696 00:17:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:04.696 00:17:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:10:04.696 00:17:35 -- pm/common@44 -- $ pid=3903521
00:10:04.696 00:17:35 -- pm/common@50 -- $ kill -TERM 3903521
00:10:04.696 00:17:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:04.696 00:17:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:10:04.696 00:17:35 -- pm/common@44 -- $ pid=3903547
00:10:04.696 00:17:35 -- pm/common@50 -- $ sudo -E kill -TERM 3903547
00:10:04.696 + [[ -n 3762952 ]]
00:10:04.696 + sudo kill 3762952
00:10:04.705 [Pipeline] }
00:10:04.719 [Pipeline] // stage
00:10:04.723 [Pipeline] }
00:10:04.737 [Pipeline] // timeout
00:10:04.742 [Pipeline] }
00:10:04.755 [Pipeline] // catchError
00:10:04.760 [Pipeline] }
00:10:04.773 [Pipeline] // wrap
00:10:04.779 [Pipeline] }
00:10:04.791 [Pipeline] // catchError
00:10:04.800 [Pipeline] stage
00:10:04.802 [Pipeline] { (Epilogue)
00:10:04.815 [Pipeline] catchError
00:10:04.817 [Pipeline] {
00:10:04.830 [Pipeline] echo
00:10:04.832 Cleanup processes
00:10:04.838 [Pipeline] sh
00:10:05.122 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:05.122 3903682 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:10:05.122 3903915 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:05.134 [Pipeline] sh
00:10:05.413 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:05.413 ++ grep -v 'sudo pgrep'
00:10:05.413 ++ awk '{print $1}'
00:10:05.413 + sudo kill -9 3903682
00:10:05.423 [Pipeline] sh
00:10:05.726 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:17.944 [Pipeline] sh
00:10:18.271 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:18.271 Artifacts sizes are good
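
The Epilogue's "Cleanup processes" step above repeats a pattern used at both ends of this job: enumerate everything still running out of the workspace with pgrep -af, drop the pgrep invocation itself, keep only the pid column, and force-kill the remainder (here the ipmitool left behind by the BMC collector). As a reusable snippet, with the workspace path as in this job:

    # Force-kill any process still running out of the workspace.
    ws=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [[ -n $pids ]] && sudo kill -9 $pids || true
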
00:10:18.295 [Pipeline] archiveArtifacts
00:10:18.303 Archiving artifacts
00:10:18.428 [Pipeline] sh
00:10:18.712 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:10:18.726 [Pipeline] cleanWs
00:10:18.735 [WS-CLEANUP] Deleting project workspace...
00:10:18.736 [WS-CLEANUP] Deferred wipeout is used...
00:10:18.742 [WS-CLEANUP] done
00:10:18.744 [Pipeline] }
00:10:18.761 [Pipeline] // catchError
00:10:18.772 [Pipeline] sh
00:10:19.054 + logger -p user.info -t JENKINS-CI
00:10:19.063 [Pipeline] }
00:10:19.076 [Pipeline] // stage
00:10:19.082 [Pipeline] }
00:10:19.095 [Pipeline] // node
00:10:19.100 [Pipeline] End of Pipeline
00:10:19.147 Finished: SUCCESS